2023-07-14 04:15:41,823 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b
2023-07-14 04:15:41,840 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-14 04:15:41,856 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-14 04:15:41,856 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e, deleteOnExit=true
2023-07-14 04:15:41,857 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-14 04:15:41,857 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/test.cache.data in system properties and HBase conf
2023-07-14 04:15:41,858 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.tmp.dir in system properties and HBase conf
2023-07-14 04:15:41,858 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir in system properties and HBase conf
2023-07-14 04:15:41,859 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-14 04:15:41,859 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-14 04:15:41,859 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-14 04:15:41,984 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-14 04:15:42,392 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem.
Skipping on block location reordering 2023-07-14 04:15:42,396 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:15:42,397 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:15:42,397 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 04:15:42,397 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:15:42,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 04:15:42,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 04:15:42,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:15:42,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:15:42,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 04:15:42,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/nfs.dump.dir in system properties and HBase conf 2023-07-14 04:15:42,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir in system properties and HBase conf 2023-07-14 04:15:42,400 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:15:42,400 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 04:15:42,400 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 04:15:42,946 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:15:42,950 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:15:43,262 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-14 04:15:43,440 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-14 04:15:43,455 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:15:43,487 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:15:43,524 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/Jetty_localhost_33317_hdfs____.wqticd/webapp 2023-07-14 04:15:43,684 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33317 2023-07-14 04:15:43,695 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:15:43,696 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:15:44,200 WARN [Listener at localhost/33983] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:15:44,269 WARN [Listener at localhost/33983] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:15:44,292 WARN [Listener at localhost/33983] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:15:44,300 INFO [Listener at localhost/33983] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:15:44,308 INFO [Listener at localhost/33983] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/Jetty_localhost_40611_datanode____7ggq81/webapp 2023-07-14 04:15:44,435 INFO [Listener at localhost/33983] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40611 2023-07-14 04:15:44,856 WARN [Listener at localhost/45863] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:15:44,871 WARN [Listener at localhost/45863] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:15:44,876 WARN [Listener at localhost/45863] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:15:44,878 INFO [Listener at localhost/45863] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:15:44,886 INFO [Listener at localhost/45863] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/Jetty_localhost_46599_datanode____.vs2ibq/webapp 2023-07-14 04:15:45,020 INFO [Listener at localhost/45863] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46599 2023-07-14 04:15:45,042 WARN [Listener at localhost/37267] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:15:45,065 WARN [Listener at localhost/37267] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:15:45,073 WARN [Listener at localhost/37267] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:15:45,076 INFO [Listener at localhost/37267] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:15:45,084 INFO [Listener at localhost/37267] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/Jetty_localhost_39967_datanode____.g0724l/webapp 2023-07-14 04:15:45,238 INFO [Listener at localhost/37267] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39967 2023-07-14 04:15:45,258 WARN [Listener at localhost/46681] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:15:45,449 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ee8558d49a78d5: Processing first storage report for DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34 from datanode c5b2c7e4-deea-42ee-99fd-89a518b7f806 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ee8558d49a78d5: from storage DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34 node DatanodeRegistration(127.0.0.1:33565, datanodeUuid=c5b2c7e4-deea-42ee-99fd-89a518b7f806, infoPort=44523, 
infoSecurePort=0, ipcPort=45863, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b4a00c755c59f6a: Processing first storage report for DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218 from datanode 0adb0faa-8114-41ff-822f-9596167e2e37 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b4a00c755c59f6a: from storage DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218 node DatanodeRegistration(127.0.0.1:43633, datanodeUuid=0adb0faa-8114-41ff-822f-9596167e2e37, infoPort=43445, infoSecurePort=0, ipcPort=37267, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44adab39e6262491: Processing first storage report for DS-d5641973-e14e-4459-8879-1e0f49f3a25f from datanode 916a5ea7-e13f-42c6-b6ee-79eaac5e19ab 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44adab39e6262491: from storage DS-d5641973-e14e-4459-8879-1e0f49f3a25f node DatanodeRegistration(127.0.0.1:39385, datanodeUuid=916a5ea7-e13f-42c6-b6ee-79eaac5e19ab, infoPort=33837, infoSecurePort=0, ipcPort=46681, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,451 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3ee8558d49a78d5: Processing first storage report for DS-5f8f5152-3530-46c2-bf8a-19272ce21c33 from datanode c5b2c7e4-deea-42ee-99fd-89a518b7f806 2023-07-14 04:15:45,452 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3ee8558d49a78d5: from storage DS-5f8f5152-3530-46c2-bf8a-19272ce21c33 node DatanodeRegistration(127.0.0.1:33565, datanodeUuid=c5b2c7e4-deea-42ee-99fd-89a518b7f806, infoPort=44523, infoSecurePort=0, ipcPort=45863, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,452 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3b4a00c755c59f6a: Processing first storage report for DS-84cd6ff1-a1da-4f2b-8c0b-424f58900619 from datanode 0adb0faa-8114-41ff-822f-9596167e2e37 2023-07-14 04:15:45,452 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3b4a00c755c59f6a: from storage DS-84cd6ff1-a1da-4f2b-8c0b-424f58900619 node DatanodeRegistration(127.0.0.1:43633, datanodeUuid=0adb0faa-8114-41ff-822f-9596167e2e37, infoPort=43445, infoSecurePort=0, ipcPort=37267, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,452 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44adab39e6262491: Processing first storage report for DS-7bd3cde2-0eb6-4fbc-9ee6-90b0ba4dc1a2 from datanode 916a5ea7-e13f-42c6-b6ee-79eaac5e19ab 2023-07-14 04:15:45,452 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44adab39e6262491: from storage 
DS-7bd3cde2-0eb6-4fbc-9ee6-90b0ba4dc1a2 node DatanodeRegistration(127.0.0.1:39385, datanodeUuid=916a5ea7-e13f-42c6-b6ee-79eaac5e19ab, infoPort=33837, infoSecurePort=0, ipcPort=46681, storageInfo=lv=-57;cid=testClusterID;nsid=1271584968;c=1689308143026), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:15:45,695 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b 2023-07-14 04:15:45,763 INFO [Listener at localhost/46681] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/zookeeper_0, clientPort=56534, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 04:15:45,777 INFO [Listener at localhost/46681] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56534 2023-07-14 04:15:45,787 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:45,789 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:46,465 INFO [Listener at localhost/46681] util.FSUtils(471): Created version file at hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 with version=8 2023-07-14 04:15:46,465 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/hbase-staging 2023-07-14 04:15:46,475 DEBUG [Listener at localhost/46681] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 04:15:46,475 DEBUG [Listener at localhost/46681] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 04:15:46,475 DEBUG [Listener at localhost/46681] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 04:15:46,475 DEBUG [Listener at localhost/46681] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
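For reference, the cluster whose bring-up is logged above (one master, three region servers, three datanodes, one ZooKeeper server, as described by the StartMiniClusterOption entry) is the kind of layout HBaseTestingUtility builds for tests like TestRSGroupsAdmin1. A minimal sketch of such a setup follows; the class name MiniClusterSketch is hypothetical and this is not the actual test source, only the public test-utility API as I understand it.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.apache.hadoop.hbase.testclassification.LargeTests;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.experimental.categories.Category;

@Category({ LargeTests.class })  // the large-test category is what gives the 13 min class timeout seen in the log
public class MiniClusterSketch {  // hypothetical name, not the real test class

  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);  // starts DFS, ZooKeeper, master and region servers as logged above
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}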
2023-07-14 04:15:46,815 INFO [Listener at localhost/46681] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-14 04:15:47,487 INFO [Listener at localhost/46681] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:15:47,524 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:47,525 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:47,525 INFO [Listener at localhost/46681] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:15:47,525 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:47,525 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:15:47,737 INFO [Listener at localhost/46681] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:15:47,839 DEBUG [Listener at localhost/46681] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-14 04:15:47,967 INFO [Listener at localhost/46681] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34797 2023-07-14 04:15:47,983 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:47,986 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:48,015 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34797 connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:48,062 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:347970x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:15:48,064 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34797-0x101620b2b570000 connected 2023-07-14 04:15:48,112 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:15:48,113 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:15:48,117 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:15:48,127 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34797 2023-07-14 04:15:48,128 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34797 2023-07-14 04:15:48,129 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34797 2023-07-14 04:15:48,130 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34797 2023-07-14 04:15:48,130 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34797 2023-07-14 04:15:48,163 INFO [Listener at localhost/46681] log.Log(170): Logging initialized @7040ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-14 04:15:48,290 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:15:48,291 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:15:48,292 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:15:48,294 INFO [Listener at localhost/46681] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 04:15:48,294 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:15:48,294 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:15:48,298 INFO [Listener at localhost/46681] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
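At this point the master has joined the ZooKeeper ensemble at 127.0.0.1:56534 and set watchers under the /hbase baseZNode (/hbase/master, /hbase/running, /hbase/acl). As a debugging aid only, and not part of the test itself, a small sketch of listing those znodes with the plain ZooKeeper client; the client port is taken from the log above and changes on every run, and a robust version would wait for the SyncConnected event before issuing requests.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ZkInspect {  // hypothetical helper, not part of the test
  public static void main(String[] args) throws Exception {
    // Connect string from the log; the mini ZooKeeper cluster picks a fresh port each run.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56534", 30000, event -> { });
    try {
      // /hbase is the baseZNode used throughout the log lines above.
      List<String> children = zk.getChildren("/hbase", false);
      System.out.println("children of /hbase: " + children);
      System.out.println("/hbase/master exists: " + (zk.exists("/hbase/master", false) != null));
    } finally {
      zk.close();
    }
  }
}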
2023-07-14 04:15:48,362 INFO [Listener at localhost/46681] http.HttpServer(1146): Jetty bound to port 37465 2023-07-14 04:15:48,364 INFO [Listener at localhost/46681] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:48,393 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,397 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38bddd36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:15:48,397 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,397 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@680fffdc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:15:48,573 INFO [Listener at localhost/46681] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:15:48,586 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:15:48,586 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:15:48,589 INFO [Listener at localhost/46681] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:15:48,598 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,624 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@38aa31da{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/jetty-0_0_0_0-37465-hbase-server-2_4_18-SNAPSHOT_jar-_-any-335191846855067056/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:15:48,636 INFO [Listener at localhost/46681] server.AbstractConnector(333): Started ServerConnector@7e43481b{HTTP/1.1, (http/1.1)}{0.0.0.0:37465} 2023-07-14 04:15:48,636 INFO [Listener at localhost/46681] server.Server(415): Started @7513ms 2023-07-14 04:15:48,639 INFO [Listener at localhost/46681] master.HMaster(444): hbase.rootdir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4, hbase.cluster.distributed=false 2023-07-14 04:15:48,722 INFO [Listener at localhost/46681] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:15:48,723 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,723 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,723 INFO 
[Listener at localhost/46681] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:15:48,723 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,723 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:15:48,729 INFO [Listener at localhost/46681] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:15:48,732 INFO [Listener at localhost/46681] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34609 2023-07-14 04:15:48,734 INFO [Listener at localhost/46681] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:15:48,741 DEBUG [Listener at localhost/46681] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:15:48,742 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:48,744 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:48,746 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34609 connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:48,751 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:346090x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:15:48,752 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34609-0x101620b2b570001 connected 2023-07-14 04:15:48,752 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:15:48,754 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:15:48,754 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:15:48,755 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34609 2023-07-14 04:15:48,755 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34609 2023-07-14 04:15:48,756 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34609 2023-07-14 04:15:48,756 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34609 2023-07-14 04:15:48,756 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34609 2023-07-14 04:15:48,759 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:15:48,759 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:15:48,759 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:15:48,760 INFO [Listener at localhost/46681] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:15:48,760 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:15:48,760 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:15:48,761 INFO [Listener at localhost/46681] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:15:48,762 INFO [Listener at localhost/46681] http.HttpServer(1146): Jetty bound to port 46165 2023-07-14 04:15:48,763 INFO [Listener at localhost/46681] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:48,764 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,765 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:15:48,765 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,765 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:15:48,888 INFO [Listener at localhost/46681] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:15:48,890 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:15:48,891 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:15:48,891 INFO [Listener at localhost/46681] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:15:48,892 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:48,896 INFO 
[Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@720179eb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/jetty-0_0_0_0-46165-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5020245885743277613/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:15:48,897 INFO [Listener at localhost/46681] server.AbstractConnector(333): Started ServerConnector@4aa89d1f{HTTP/1.1, (http/1.1)}{0.0.0.0:46165} 2023-07-14 04:15:48,897 INFO [Listener at localhost/46681] server.Server(415): Started @7774ms 2023-07-14 04:15:48,911 INFO [Listener at localhost/46681] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:15:48,911 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,911 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,912 INFO [Listener at localhost/46681] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:15:48,912 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:48,912 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:15:48,912 INFO [Listener at localhost/46681] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:15:48,914 INFO [Listener at localhost/46681] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33827 2023-07-14 04:15:48,915 INFO [Listener at localhost/46681] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:15:48,917 DEBUG [Listener at localhost/46681] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:15:48,918 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:48,942 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:48,944 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33827 connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:48,982 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:338270x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 
04:15:48,984 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:338270x0, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:15:48,985 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:338270x0, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:15:48,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33827-0x101620b2b570002 connected 2023-07-14 04:15:48,987 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:15:49,008 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-14 04:15:49,009 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33827 2023-07-14 04:15:49,010 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33827 2023-07-14 04:15:49,013 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-14 04:15:49,022 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33827 2023-07-14 04:15:49,025 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:15:49,026 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:15:49,026 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:15:49,027 INFO [Listener at localhost/46681] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:15:49,027 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:15:49,027 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:15:49,027 INFO [Listener at localhost/46681] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 04:15:49,028 INFO [Listener at localhost/46681] http.HttpServer(1146): Jetty bound to port 45855 2023-07-14 04:15:49,028 INFO [Listener at localhost/46681] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:49,041 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,042 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:15:49,043 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,043 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:15:49,176 INFO [Listener at localhost/46681] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:15:49,177 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:15:49,177 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:15:49,177 INFO [Listener at localhost/46681] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:15:49,178 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,179 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c96a714{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/jetty-0_0_0_0-45855-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6659639246902165539/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:15:49,180 INFO [Listener at localhost/46681] server.AbstractConnector(333): Started ServerConnector@74bbd774{HTTP/1.1, (http/1.1)}{0.0.0.0:45855} 2023-07-14 04:15:49,181 INFO [Listener at localhost/46681] server.Server(415): Started @8058ms 2023-07-14 04:15:49,193 INFO [Listener at localhost/46681] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:15:49,193 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:49,193 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:49,193 INFO [Listener at localhost/46681] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:15:49,193 INFO 
[Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:49,193 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:15:49,194 INFO [Listener at localhost/46681] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:15:49,195 INFO [Listener at localhost/46681] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34763 2023-07-14 04:15:49,196 INFO [Listener at localhost/46681] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:15:49,197 DEBUG [Listener at localhost/46681] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:15:49,198 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:49,200 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:49,201 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34763 connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:49,205 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:347630x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:15:49,207 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34763-0x101620b2b570003 connected 2023-07-14 04:15:49,207 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:15:49,207 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:15:49,208 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:15:49,211 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34763 2023-07-14 04:15:49,213 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34763 2023-07-14 04:15:49,216 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34763 2023-07-14 04:15:49,216 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34763 2023-07-14 04:15:49,217 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34763 2023-07-14 04:15:49,219 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:15:49,219 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:15:49,219 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:15:49,220 INFO [Listener at localhost/46681] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:15:49,220 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:15:49,220 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:15:49,220 INFO [Listener at localhost/46681] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:15:49,221 INFO [Listener at localhost/46681] http.HttpServer(1146): Jetty bound to port 46589 2023-07-14 04:15:49,221 INFO [Listener at localhost/46681] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:49,227 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,227 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:15:49,228 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,228 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:15:49,354 INFO [Listener at localhost/46681] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:15:49,355 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:15:49,355 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:15:49,356 INFO [Listener at localhost/46681] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:15:49,357 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:49,358 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5965958b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/jetty-0_0_0_0-46589-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6515665090450308542/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:15:49,359 INFO [Listener at localhost/46681] server.AbstractConnector(333): Started ServerConnector@5f48ab6c{HTTP/1.1, (http/1.1)}{0.0.0.0:46589} 2023-07-14 04:15:49,360 INFO [Listener at localhost/46681] server.Server(415): Started @8237ms 2023-07-14 04:15:49,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:49,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5812bd08{HTTP/1.1, (http/1.1)}{0.0.0.0:43191} 2023-07-14 04:15:49,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8254ms 2023-07-14 04:15:49,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:49,391 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:15:49,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:49,414 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:15:49,414 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:15:49,414 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:15:49,414 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:15:49,416 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:49,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:15:49,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34797,1689308146653 from backup master directory 2023-07-14 04:15:49,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:15:49,434 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:49,434 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:15:49,435 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:15:49,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:49,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-14 04:15:49,443 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-14 04:15:49,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/hbase.id with ID: 48ece5fe-c3d3-403c-8aa1-91b39ba284f0 2023-07-14 04:15:49,613 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:49,636 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:49,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x685da169 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:49,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38ce54d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:49,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:15:49,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 04:15:49,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-14 04:15:49,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-14 04:15:49,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-14 04:15:49,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-14 04:15:49,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:49,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store-tmp 2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:15:49,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:15:49,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
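
The entry above shows the active master materializing its local 'master:store' region with a single 'proc' column family (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536'). For orientation only, a roughly equivalent descriptor can be assembled with the public HBase 2.x client API as in the sketch below; this is not the internal MasterRegionFactory code, and the class name MasterStoreDescriptorSketch is invented for the example.

import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // 'proc' family mirroring the attributes printed in the log entry above.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
      .setBloomFilterType(BloomType.ROW)            // BLOOMFILTER => 'ROW'
      .setMaxVersions(1)                            // VERSIONS => '1'
      .setKeepDeletedCells(KeepDeletedCells.FALSE)  // KEEP_DELETED_CELLS => 'FALSE'
      .setBlocksize(65536)                          // BLOCKSIZE => '65536'
      .setInMemory(false)                           // IN_MEMORY => 'false'
      .build();
    // Table 'master:store' = namespace 'master', qualifier 'store'.
    TableDescriptor store = TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
      .setColumnFamily(proc)
      .build();
    System.out.println(store);
  }
}
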
2023-07-14 04:15:49,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:15:49,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/WALs/jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:49,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34797%2C1689308146653, suffix=, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/WALs/jenkins-hbase4.apache.org,34797,1689308146653, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/oldWALs, maxLogs=10 2023-07-14 04:15:50,007 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:50,007 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:50,007 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:15:50,016 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-14 04:15:50,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/WALs/jenkins-hbase4.apache.org,34797,1689308146653/jenkins-hbase4.apache.org%2C34797%2C1689308146653.1689308149947 2023-07-14 04:15:50,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK]] 2023-07-14 04:15:50,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:15:50,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:50,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,185 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,195 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 04:15:50,229 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 04:15:50,243 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-14 04:15:50,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:15:50,274 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:50,276 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10754338080, jitterRate=0.0015757828950881958}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:50,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:15:50,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 04:15:50,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 04:15:50,301 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 04:15:50,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 04:15:50,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-14 04:15:50,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 42 msec 2023-07-14 04:15:50,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 04:15:50,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 04:15:50,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
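
A note on the numbers in the 'Opened 1595e783b53d99cd5eef43b6debb2682' entry above: the desiredMaxFileSize of 10754338080 is consistent with the default hbase.hregion.max.filesize of 10 GiB (10737418240 bytes) plus the printed per-region jitter, i.e. 10737418240 * (1 + 0.0015757828950881958). A minimal check under that assumption (class name invented for the example):

public class SplitSizeJitterCheck {
  public static void main(String[] args) {
    long maxFileSize = 10_737_418_240L;         // default hbase.hregion.max.filesize (10 GiB)
    double jitterRate = 0.0015757828950881958;  // jitterRate printed for region 1595e783... above
    long desired = maxFileSize + Math.round(maxFileSize * jitterRate);
    System.out.println(desired);                // ~10754338080, the desiredMaxFileSize in the log
  }
}
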
2023-07-14 04:15:50,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 04:15:50,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 04:15:50,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 04:15:50,414 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:50,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 04:15:50,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 04:15:50,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 04:15:50,438 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:15:50,439 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:15:50,439 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:15:50,439 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:50,439 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:15:50,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34797,1689308146653, sessionid=0x101620b2b570000, setting cluster-up flag (Was=false) 2023-07-14 04:15:50,464 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:50,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 04:15:50,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:50,479 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:50,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 04:15:50,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:50,489 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.hbase-snapshot/.tmp 2023-07-14 04:15:50,564 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(951): ClusterId : 48ece5fe-c3d3-403c-8aa1-91b39ba284f0 2023-07-14 04:15:50,564 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(951): ClusterId : 48ece5fe-c3d3-403c-8aa1-91b39ba284f0 2023-07-14 04:15:50,565 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(951): ClusterId : 48ece5fe-c3d3-403c-8aa1-91b39ba284f0 2023-07-14 04:15:50,576 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:15:50,576 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:15:50,576 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:15:50,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 04:15:50,584 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:15:50,584 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:15:50,584 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:15:50,584 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:15:50,584 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:15:50,584 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:15:50,589 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:15:50,589 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:15:50,589 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:15:50,592 DEBUG 
[RS:1;jenkins-hbase4:33827] zookeeper.ReadOnlyZKClient(139): Connect 0x4d3fc16c to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:50,592 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ReadOnlyZKClient(139): Connect 0x0b89a956 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:50,592 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ReadOnlyZKClient(139): Connect 0x4339e289 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:50,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 04:15:50,608 DEBUG [RS:2;jenkins-hbase4:34763] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e8b2643, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:50,608 DEBUG [RS:1;jenkins-hbase4:33827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49f720f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:50,610 DEBUG [RS:0;jenkins-hbase4:34609] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f8fd7e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:50,610 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:15:50,611 DEBUG [RS:1;jenkins-hbase4:33827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5bc3c0c8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:15:50,611 DEBUG [RS:0;jenkins-hbase4:34609] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b771664, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:15:50,611 DEBUG [RS:2;jenkins-hbase4:34763] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41e4bd46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:15:50,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 04:15:50,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
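
The coprocessor entries above confirm that the rsgroup endpoint (RSGroupAdminEndpoint) and the test's CPMasterObserver were installed on the master by the test harness. Outside a test, the hbase-rsgroup feature on branch-2.4 is normally enabled through two configuration keys documented in the HBase reference guide; the sketch below only sets them on an in-memory Configuration to show the expected values (class name invented for the example).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Master coprocessor that serves the RSGroupAdminService seen in the log above.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Group-aware balancer that goes with it.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}
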
2023-07-14 04:15:50,641 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34763 2023-07-14 04:15:50,644 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33827 2023-07-14 04:15:50,644 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34609 2023-07-14 04:15:50,648 INFO [RS:0;jenkins-hbase4:34609] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:15:50,648 INFO [RS:2;jenkins-hbase4:34763] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:15:50,650 INFO [RS:2;jenkins-hbase4:34763] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:15:50,650 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:15:50,649 INFO [RS:1;jenkins-hbase4:33827] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:15:50,651 INFO [RS:1;jenkins-hbase4:33827] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:15:50,650 INFO [RS:0;jenkins-hbase4:34609] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:15:50,651 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:15:50,652 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:15:50,654 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:33827, startcode=1689308148910 2023-07-14 04:15:50,654 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34763, startcode=1689308149192 2023-07-14 04:15:50,654 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34609, startcode=1689308148721 2023-07-14 04:15:50,679 DEBUG [RS:2;jenkins-hbase4:34763] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:15:50,679 DEBUG [RS:1;jenkins-hbase4:33827] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:15:50,679 DEBUG [RS:0;jenkins-hbase4:34609] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:15:50,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 04:15:50,783 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54081, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:15:50,791 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46909, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-14 04:15:50,791 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40257, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:15:50,802 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:50,818 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:50,821 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:50,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:15:50,857 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,858 WARN [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-14 04:15:50,858 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,858 WARN [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
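
The ServerNotRunningYetException entries above are expected during startup: each region server calls regionServerStartup on the master, the master rejects the call until its own initialization is finished, and the region server sleeps (100 ms here, 200 ms on the next round) and retries. The sketch below illustrates that retry-with-backoff handshake in generic Java; it is not HRegionServer's internal code, and reportForDuty() is a stand-in.

public class ReportForDutyRetrySketch {
  // Stand-in for the RPC that keeps failing while the master is still starting up.
  static void reportForDuty() throws Exception {
    throw new Exception("Server is not running yet");
  }

  public static void main(String[] args) throws InterruptedException {
    long sleepMs = 100;                        // first retry after 100 ms, as in the log
    for (int attempt = 1; attempt <= 5; attempt++) {
      try {
        reportForDuty();
        break;                                 // registered successfully
      } catch (Exception e) {
        System.out.println("reportForDuty failed; sleeping " + sleepMs + " ms and then retrying.");
        Thread.sleep(sleepMs);
        sleepMs = Math.min(sleepMs * 2, 3000); // back off, capped
      }
    }
  }
}
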
2023-07-14 04:15:50,858 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,858 WARN [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-14 04:15:50,863 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 04:15:50,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:15:50,866 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 04:15:50,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:15:50,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:15:50,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:15:50,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:15:50,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-14 04:15:50,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:50,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:15:50,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:50,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689308180897 2023-07-14 04:15:50,901 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 04:15:50,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 04:15:50,912 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:15:50,918 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 04:15:50,923 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 04:15:50,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 04:15:50,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 04:15:50,924 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 04:15:50,927 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:15:50,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
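
The InitMetaProcedure entry above writes the hbase:meta table descriptor with its info, rep_barrier and table families. Once the cluster is serving requests, the same descriptor can be read back through the public client API; a small sketch, assuming a reachable cluster configuration on the classpath (class name invented for the example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class MetaDescriptorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
        // Expect info, rep_barrier and table, matching the descriptor logged above.
        System.out.println(cf.getNameAsString() + " blocksize=" + cf.getBlocksize());
      }
    }
  }
}
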
2023-07-14 04:15:50,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 04:15:50,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 04:15:50,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 04:15:50,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 04:15:50,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 04:15:50,960 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:33827, startcode=1689308148910 2023-07-14 04:15:50,962 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308150954,5,FailOnTimeoutGroup] 2023-07-14 04:15:50,960 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34763, startcode=1689308149192 2023-07-14 04:15:50,960 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34609, startcode=1689308148721 2023-07-14 04:15:50,967 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:50,967 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
2023-07-14 04:15:50,969 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:50,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308150966,5,FailOnTimeoutGroup] 2023-07-14 04:15:50,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:50,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 04:15:50,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:50,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:50,980 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,980 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,981 WARN [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-14 04:15:50,985 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(2830): Master is not running yet 2023-07-14 04:15:50,981 WARN [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-14 04:15:50,986 WARN [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 
2023-07-14 04:15:51,067 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:51,068 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:51,069 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 2023-07-14 04:15:51,133 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:51,136 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:15:51,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info 2023-07-14 04:15:51,140 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:15:51,142 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,142 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:15:51,146 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:15:51,147 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:15:51,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,148 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:15:51,152 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table 2023-07-14 04:15:51,153 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:15:51,164 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,167 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:15:51,168 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:15:51,173 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 04:15:51,176 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:15:51,180 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:51,182 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11426566240, jitterRate=0.06418190896511078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:15:51,182 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:15:51,182 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:15:51,183 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:15:51,183 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:15:51,183 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:15:51,183 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:15:51,186 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:33827, startcode=1689308148910 2023-07-14 04:15:51,187 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:15:51,187 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:15:51,187 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34609, startcode=1689308148721 2023-07-14 04:15:51,194 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,194 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:34763, startcode=1689308149192 2023-07-14 04:15:51,196 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
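The hbase:meta bootstrap above (descriptor write, store setup for info/rep_barrier/table, then an open and immediate close of 1588230740) prints the column-family attributes in a compact shorthand. For orientation only, here is a minimal sketch of how the same attributes map onto the public descriptor builders; the class name and the table name demo_meta_like are mine, and this is not the internal code the master uses to create meta.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      public static TableDescriptor build() {
        // Mirrors the attributes printed for the 'info' family in the log:
        // BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', BLOCKSIZE => '8192'
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();
        // A user table would go through Admin.createTable(...) with a descriptor
        // like this; hbase:meta itself is created internally by the master.
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_meta_like"))
            .setColumnFamily(info)
            .build();
      }
    }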
2023-07-14 04:15:51,198 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 04:15:51,198 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:15:51,199 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 04:15:51,204 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,204 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:15:51,204 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-14 04:15:51,211 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 2023-07-14 04:15:51,211 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33983 2023-07-14 04:15:51,211 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37465 2023-07-14 04:15:51,213 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,213 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
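The ServerEventsListenerThread lines track how many servers the rsgroup manager currently counts in the default group as each region server registers. A hedged sketch of how a client could read that membership once the cluster is up, assuming the hbase-rsgroup endpoints are installed as in this test; RSGroupAdminClient is the branch-2 rsgroup client as far as I recall (verify against the version in use), and the class name below is mine.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListDefaultGroupServers {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The three region servers registering above all land in the default group first.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println("default group servers: " + defaultGroup.getServers());
        }
      }
    }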
2023-07-14 04:15:51,214 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 04:15:51,218 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 2023-07-14 04:15:51,219 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33983 2023-07-14 04:15:51,219 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37465 2023-07-14 04:15:51,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 04:15:51,222 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:15:51,223 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 2023-07-14 04:15:51,223 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33983 2023-07-14 04:15:51,223 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37465 2023-07-14 04:15:51,234 DEBUG [RS:1;jenkins-hbase4:33827] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,235 WARN [RS:1;jenkins-hbase4:33827] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:15:51,235 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,235 WARN [RS:0;jenkins-hbase4:34609] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:15:51,236 INFO [RS:0;jenkins-hbase4:34609] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:51,236 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,236 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,237 WARN [RS:2;jenkins-hbase4:34763] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
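Each region server registers an ephemeral znode under /hbase/rs, and the watchers being set here are how the master's RegionServerTracker notices membership changes. A minimal sketch with the plain ZooKeeper client, using the quorum address and base znode printed in this log; the session timeout and class name are arbitrary choices of mine.

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class ListRsZnodes {
      public static void main(String[] args) throws Exception {
        // Quorum and base znode taken from the log above (127.0.0.1:56534, /hbase).
        ZooKeeper zk = new ZooKeeper("127.0.0.1:56534", 30000, (WatchedEvent e) -> { });
        try {
          // Each live region server has an ephemeral child under /hbase/rs.
          List<String> rs = zk.getChildren("/hbase/rs", false);
          rs.forEach(System.out::println);
        } finally {
          zk.close();
        }
      }
    }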
2023-07-14 04:15:51,237 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33827,1689308148910] 2023-07-14 04:15:51,237 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34609,1689308148721] 2023-07-14 04:15:51,237 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34763,1689308149192] 2023-07-14 04:15:51,237 INFO [RS:2;jenkins-hbase4:34763] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:51,238 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,238 INFO [RS:1;jenkins-hbase4:33827] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:51,238 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,256 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 04:15:51,270 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 04:15:51,276 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,276 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,276 DEBUG [RS:1;jenkins-hbase4:33827] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,277 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,277 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,277 DEBUG [RS:1;jenkins-hbase4:33827] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,277 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,277 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,277 DEBUG [RS:1;jenkins-hbase4:33827] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,290 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:15:51,290 DEBUG [RS:0;jenkins-hbase4:34609] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:15:51,290 DEBUG [RS:1;jenkins-hbase4:33827] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:15:51,302 INFO [RS:2;jenkins-hbase4:34763] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:15:51,302 INFO [RS:1;jenkins-hbase4:33827] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:15:51,302 INFO [RS:0;jenkins-hbase4:34609] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:15:51,328 INFO [RS:2;jenkins-hbase4:34763] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:15:51,328 INFO [RS:0;jenkins-hbase4:34609] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:15:51,328 INFO [RS:1;jenkins-hbase4:33827] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:15:51,334 INFO [RS:2;jenkins-hbase4:34763] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:15:51,334 INFO [RS:0;jenkins-hbase4:34609] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:15:51,334 INFO [RS:1;jenkins-hbase4:33827] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:15:51,334 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,335 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,335 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
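The per-server figures being set up here (metrics recomputed every 5000 ms, the global memstore limits, the compaction throughput bounds) surface to clients through the Admin API as ClusterMetrics. A small read-only sketch, assuming a reachable cluster; the class name is mine.

    import java.io.IOException;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class PrintLiveServers {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Region counts and load come from the same wrapper the log mentions.
          ClusterMetrics metrics = admin.getClusterMetrics();
          metrics.getLiveServerMetrics().forEach((serverName, serverMetrics) ->
              System.out.println(serverName + " regions=" + serverMetrics.getRegionMetrics().size()));
        }
      }
    }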
2023-07-14 04:15:51,336 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:15:51,336 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:15:51,336 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:15:51,345 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,345 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,345 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,345 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,345 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,345 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,345 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:15:51,346 DEBUG 
[RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,346 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:15:51,347 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:2;jenkins-hbase4:34763] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:15:51,347 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,347 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:1;jenkins-hbase4:33827] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,348 DEBUG [RS:0;jenkins-hbase4:34609] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:51,349 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, 
unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,349 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,350 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,351 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,369 INFO [RS:0;jenkins-hbase4:34609] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:15:51,369 INFO [RS:1;jenkins-hbase4:33827] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:15:51,369 INFO [RS:2;jenkins-hbase4:34763] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:15:51,374 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34609,1689308148721-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,374 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33827,1689308148910-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,374 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34763,1689308149192-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
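All of the "ScheduledChore ... is enabled" lines (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore) come from the same ChoreService mechanism. ChoreService and ScheduledChore are HBase-internal, audience-private classes, so the following is only an illustrative sketch of that mechanism under that assumption, not a recommended user API; names and the 1000 ms period mirror the log.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService service = new ChoreService("demo");
        // Runs every 1000 ms, the same period as CompactionChecker above.
        ScheduledChore chore = new ScheduledChore("demoChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("chore tick");
          }
        };
        service.scheduleChore(chore);
        Thread.sleep(3000);
        service.shutdown();
      }
    }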
2023-07-14 04:15:51,394 INFO [RS:2;jenkins-hbase4:34763] regionserver.Replication(203): jenkins-hbase4.apache.org,34763,1689308149192 started 2023-07-14 04:15:51,394 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34763,1689308149192, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34763, sessionid=0x101620b2b570003 2023-07-14 04:15:51,395 INFO [RS:1;jenkins-hbase4:33827] regionserver.Replication(203): jenkins-hbase4.apache.org,33827,1689308148910 started 2023-07-14 04:15:51,395 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:15:51,395 DEBUG [RS:2;jenkins-hbase4:34763] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,395 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34763,1689308149192' 2023-07-14 04:15:51,395 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:15:51,395 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33827,1689308148910, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33827, sessionid=0x101620b2b570002 2023-07-14 04:15:51,396 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:15:51,396 DEBUG [RS:1;jenkins-hbase4:33827] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,396 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33827,1689308148910' 2023-07-14 04:15:51,396 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:15:51,396 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:15:51,397 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:15:51,397 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:15:51,397 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:15:51,397 DEBUG [RS:2;jenkins-hbase4:34763] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,397 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34763,1689308149192' 2023-07-14 04:15:51,397 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:15:51,397 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:15:51,397 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:15:51,397 
DEBUG [RS:1;jenkins-hbase4:33827] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:51,397 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33827,1689308148910' 2023-07-14 04:15:51,397 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:15:51,398 DEBUG [RS:2;jenkins-hbase4:34763] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:15:51,398 DEBUG [RS:1;jenkins-hbase4:33827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:15:51,398 INFO [RS:0;jenkins-hbase4:34609] regionserver.Replication(203): jenkins-hbase4.apache.org,34609,1689308148721 started 2023-07-14 04:15:51,398 DEBUG [RS:2;jenkins-hbase4:34763] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:15:51,399 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34609,1689308148721, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34609, sessionid=0x101620b2b570001 2023-07-14 04:15:51,399 INFO [RS:2;jenkins-hbase4:34763] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:15:51,399 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:15:51,399 DEBUG [RS:1;jenkins-hbase4:33827] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:15:51,399 DEBUG [RS:0;jenkins-hbase4:34609] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,399 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34609,1689308148721' 2023-07-14 04:15:51,399 INFO [RS:2;jenkins-hbase4:34763] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 04:15:51,399 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:15:51,399 INFO [RS:1;jenkins-hbase4:33827] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:15:51,399 INFO [RS:1;jenkins-hbase4:33827] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
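The flush-table-proc and online-snapshot members that each region server just started are the server-side halves of two client-visible operations. A minimal sketch of the client side, assuming a table named demo_table already exists; the table and snapshot names are hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshot {
      public static void main(String[] args) throws IOException {
        TableName table = TableName.valueOf("demo_table"); // hypothetical table name
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Drives the flush-table-proc members started on each region server above.
          admin.flush(table);
          // Drives the online-snapshot members; the snapshot name is arbitrary.
          admin.snapshot("demo_snapshot", table);
        }
      }
    }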
2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34609,1689308148721' 2023-07-14 04:15:51,400 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:15:51,401 DEBUG [RS:0;jenkins-hbase4:34609] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:15:51,401 DEBUG [RS:0;jenkins-hbase4:34609] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:15:51,401 INFO [RS:0;jenkins-hbase4:34609] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:15:51,401 INFO [RS:0;jenkins-hbase4:34609] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 04:15:51,422 DEBUG [jenkins-hbase4:34797] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 04:15:51,489 DEBUG [jenkins-hbase4:34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:51,491 DEBUG [jenkins-hbase4:34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:51,491 DEBUG [jenkins-hbase4:34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:51,491 DEBUG [jenkins-hbase4:34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:51,491 DEBUG [jenkins-hbase4:34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:51,494 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34763,1689308149192, state=OPENING 2023-07-14 04:15:51,502 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 04:15:51,504 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:51,505 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:15:51,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:51,511 INFO [RS:0;jenkins-hbase4:34609] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34609%2C1689308148721, suffix=, 
logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34609,1689308148721, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:15:51,511 INFO [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34763%2C1689308149192, suffix=, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34763,1689308149192, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:15:51,511 INFO [RS:1;jenkins-hbase4:33827] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33827%2C1689308148910, suffix=, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,33827,1689308148910, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:15:51,545 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:15:51,545 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:51,545 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:51,548 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:51,549 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:15:51,549 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:51,563 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:51,563 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 
04:15:51,563 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:51,573 INFO [RS:1;jenkins-hbase4:33827] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,33827,1689308148910/jenkins-hbase4.apache.org%2C33827%2C1689308148910.1689308151515 2023-07-14 04:15:51,573 INFO [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34763,1689308149192/jenkins-hbase4.apache.org%2C34763%2C1689308149192.1689308151515 2023-07-14 04:15:51,574 DEBUG [RS:1;jenkins-hbase4:33827] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK]] 2023-07-14 04:15:51,574 DEBUG [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK]] 2023-07-14 04:15:51,576 INFO [RS:0;jenkins-hbase4:34609] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34609,1689308148721/jenkins-hbase4.apache.org%2C34609%2C1689308148721.1689308151515 2023-07-14 04:15:51,576 DEBUG [RS:0;jenkins-hbase4:34609] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK], DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK]] 2023-07-14 04:15:51,691 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:51,693 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:51,696 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33584, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:51,712 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 04:15:51,713 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:51,716 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34763%2C1689308149192.meta, suffix=.meta, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34763,1689308149192, 
archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:15:51,739 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:15:51,740 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:51,740 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:51,746 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,34763,1689308149192/jenkins-hbase4.apache.org%2C34763%2C1689308149192.meta.1689308151718.meta 2023-07-14 04:15:51,747 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK]] 2023-07-14 04:15:51,747 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:15:51,749 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:15:51,751 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 04:15:51,753 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
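The "WAL configuration" lines above (blocksize=256 MB, rollsize=128 MB, maxLogs=32, AsyncFSWALProvider) are driven by a handful of site-configuration keys. A hedged sketch follows; the key names are quoted from memory and should be checked against the hbase-default.xml of the version in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Selects the AsyncFSWALProvider seen in the log; "filesystem" would pick FSHLog.
        conf.set("hbase.wal.provider", "asyncfs");
        // Cap on un-archived WAL files per server (maxLogs=32 in the log).
        conf.setInt("hbase.regionserver.maxlogs", 32);
        // Roll size is derived from the WAL block size times a multiplier
        // (256 MB * 0.5 gives the 128 MB rollsize printed above).
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        return conf;
      }
    }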
2023-07-14 04:15:51,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 04:15:51,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:51,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 04:15:51,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 04:15:51,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:15:51,763 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info 2023-07-14 04:15:51,763 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info 2023-07-14 04:15:51,764 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:15:51,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:15:51,766 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:15:51,766 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:15:51,766 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:15:51,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:15:51,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table 2023-07-14 04:15:51,769 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table 2023-07-14 04:15:51,769 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:15:51,770 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:51,771 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:15:51,773 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:15:51,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
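The CompactionConfiguration dump repeated for each store lists the effective knobs (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, and so on). The same knobs can also be overridden per column family through the descriptor builder, as in this sketch; the family name and values are copied from the log, and whether to tune them at all is workload-dependent.

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CompactionTuningSketch {
      public static ColumnFamilyDescriptor build() {
        // Per-family overrides of the compaction settings printed above.
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setConfiguration("hbase.hstore.compaction.min", "3")
            .setConfiguration("hbase.hstore.compaction.max", "10")
            .setConfiguration("hbase.hstore.compaction.ratio", "1.2")
            .build();
      }
    }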
2023-07-14 04:15:51,779 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:15:51,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9507752960, jitterRate=-0.11452150344848633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:15:51,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:15:51,791 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689308151683 2023-07-14 04:15:51,814 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 04:15:51,815 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 04:15:51,816 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34763,1689308149192, state=OPEN 2023-07-14 04:15:51,818 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 04:15:51,818 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:15:51,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 04:15:51,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34763,1689308149192 in 310 msec 2023-07-14 04:15:51,827 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 04:15:51,827 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 606 msec 2023-07-14 04:15:51,832 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2070 sec 2023-07-14 04:15:51,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689308151832, completionTime=-1 2023-07-14 04:15:51,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 04:15:51,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
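Once the OpenRegionProcedure finishes and the master marks the meta location OPEN in ZooKeeper, clients resolve hbase:meta through the normal locator path. A small sketch; the class name is mine.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateMeta {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Resolves to the server the master published in /hbase/meta-region-server.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }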
2023-07-14 04:15:51,886 DEBUG [hconnection-0x6391c436-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:51,889 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33588, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:51,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 04:15:51,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689308211908 2023-07-14 04:15:51,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689308271908 2023-07-14 04:15:51,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 75 msec 2023-07-14 04:15:51,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34797,1689308146653-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34797,1689308146653-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,942 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34797,1689308146653-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34797, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:51,951 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 04:15:51,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
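The TableNamespaceManager line marks the creation of hbase:namespace, the system table that backs user namespaces. For orientation, a minimal client-side sketch of namespace creation; the namespace name demo_ns is mine.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // User namespaces end up as rows in the hbase:namespace table whose
          // creation is being scheduled in the log above.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }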
2023-07-14 04:15:51,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:15:51,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 04:15:51,974 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:15:51,976 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:15:51,998 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:15:52,000 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 04:15:52,000 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,003 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:15:52,004 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 empty. 
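The hbase:rsgroup descriptor above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A sketch of how a user table could declare the same attributes through the public builder; the table and family names are mine, and hbase:rsgroup itself is created by the master, not by client code.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsgroupLikeDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        // Mirrors the two table attributes in the hbase:rsgroup descriptor above.
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_rsgroup_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("m")))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }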
2023-07-14 04:15:52,005 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,005 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:15:52,005 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 04:15:52,009 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,009 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e empty. 2023-07-14 04:15:52,010 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,010 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 04:15:52,059 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:52,061 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 75377afadc385c92d6b322193a5c5a3e, NAME => 'hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:52,067 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:52,071 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 73c3c960f2db2f2a26d94c9444d65972, NAME => 'hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated 
hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 75377afadc385c92d6b322193a5c5a3e, disabling compactions & flushes 2023-07-14 04:15:52,087 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. after waiting 0 ms 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:52,087 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:52,087 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 75377afadc385c92d6b322193a5c5a3e: 2023-07-14 04:15:52,102 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 73c3c960f2db2f2a26d94c9444d65972, disabling compactions & flushes 2023-07-14 04:15:52,103 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. after waiting 0 ms 2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:15:52,103 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
2023-07-14 04:15:52,103 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 73c3c960f2db2f2a26d94c9444d65972: 2023-07-14 04:15:52,107 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:15:52,119 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308152105"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308152105"}]},"ts":"1689308152105"} 2023-07-14 04:15:52,119 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308152108"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308152108"}]},"ts":"1689308152108"} 2023-07-14 04:15:52,151 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:15:52,153 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:15:52,153 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:15:52,155 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:15:52,158 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308152153"}]},"ts":"1689308152153"} 2023-07-14 04:15:52,158 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308152155"}]},"ts":"1689308152155"} 2023-07-14 04:15:52,162 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 04:15:52,164 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 04:15:52,168 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:52,168 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:52,168 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:52,168 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:52,168 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:52,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, ASSIGN}] 2023-07-14 04:15:52,171 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:52,171 DEBUG [PEWorker-3] 
balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:52,171 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:52,171 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:52,171 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:52,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, ASSIGN}] 2023-07-14 04:15:52,183 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, ASSIGN 2023-07-14 04:15:52,183 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, ASSIGN 2023-07-14 04:15:52,187 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:15:52,187 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:52,188 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
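The two TransitRegionStateProcedure/ASSIGN subprocedures above drive each new region from OFFLINE to OPEN. On the client side a test usually just blocks until that has happened; the following is a small sketch of one way to do so with the public Admin API, with the polling interval chosen arbitrarily for illustration.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class WaitForAssign {
      // Blocks until every region of the table is assigned and open,
      // i.e. until ASSIGN procedures like pid=6 and pid=7 above have finished.
      static void waitUntilAvailable(Admin admin, TableName table) throws Exception {
        while (!admin.isTableAvailable(table)) {
          Thread.sleep(100);
        }
      }
    }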
2023-07-14 04:15:52,190 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:52,190 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:52,190 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308152190"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308152190"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308152190"}]},"ts":"1689308152190"} 2023-07-14 04:15:52,191 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308152190"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308152190"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308152190"}]},"ts":"1689308152190"} 2023-07-14 04:15:52,199 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:52,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:52,360 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:52,360 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:52,361 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
2023-07-14 04:15:52,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73c3c960f2db2f2a26d94c9444d65972, NAME => 'hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:15:52,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,363 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:52,363 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,364 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:52,366 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,369 DEBUG [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info 2023-07-14 04:15:52,369 DEBUG [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info 2023-07-14 04:15:52,370 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73c3c960f2db2f2a26d94c9444d65972 columnFamilyName info 2023-07-14 04:15:52,370 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
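The CompactionConfiguration and cacheConfig lines printed while the store opens are derived from a handful of site-configuration keys plus per-family attributes. The sketch below shows the corresponding knobs; the values simply restate what the log prints, and the family name is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class StoreTuning {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Inputs to the CompactionConfiguration values logged when the store opened.
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // ratio 1.200000

        // Per-family switches behind the cacheConfig line.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBlockCacheEnabled(true)        // cacheDataOnRead=true
            .setPrefetchBlocksOnOpen(false)    // prefetchOnOpen=false
            .build();
        System.out.println(info);
      }
    }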
2023-07-14 04:15:52,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 75377afadc385c92d6b322193a5c5a3e, NAME => 'hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:15:52,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:15:52,371 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] regionserver.HStore(310): Store=73c3c960f2db2f2a26d94c9444d65972/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:52,371 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. service=MultiRowMutationService 2023-07-14 04:15:52,372 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-14 04:15:52,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:52,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,372 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,373 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,374 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,375 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,377 DEBUG [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m 2023-07-14 04:15:52,377 DEBUG [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m 2023-07-14 04:15:52,378 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 75377afadc385c92d6b322193a5c5a3e columnFamilyName m 2023-07-14 04:15:52,379 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] regionserver.HStore(310): Store=75377afadc385c92d6b322193a5c5a3e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:52,379 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:15:52,380 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,381 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,383 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:52,384 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73c3c960f2db2f2a26d94c9444d65972; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9956989280, jitterRate=-0.07268311083316803}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:52,384 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73c3c960f2db2f2a26d94c9444d65972: 2023-07-14 04:15:52,386 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:52,390 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972., pid=8, masterSystemTime=1689308152353 2023-07-14 04:15:52,390 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:52,392 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 75377afadc385c92d6b322193a5c5a3e; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4329230d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:52,392 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 75377afadc385c92d6b322193a5c5a3e: 2023-07-14 04:15:52,397 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e., pid=9, masterSystemTime=1689308152360 2023-07-14 04:15:52,397 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:15:52,399 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:15:52,401 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:52,401 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308152400"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308152400"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308152400"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308152400"}]},"ts":"1689308152400"} 2023-07-14 04:15:52,401 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:52,402 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
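The meta Puts above record, for each region, its regioninfo, the hosting server and the open sequence id. Once the regions are OPEN, a client can read the same assignment back through a RegionLocator; a small sketch follows, with the Connection assumed to be already open.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class WhereAreMyRegions {
      // Reads the same region -> server mapping the RegionStateStore just wrote to hbase:meta.
      static void print(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase", "rsgroup"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }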
2023-07-14 04:15:52,404 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:52,405 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308152403"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308152403"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308152403"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308152403"}]},"ts":"1689308152403"} 2023-07-14 04:15:52,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-14 04:15:52,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,34763,1689308149192 in 206 msec 2023-07-14 04:15:52,414 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-14 04:15:52,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,34609,1689308148721 in 207 msec 2023-07-14 04:15:52,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-14 04:15:52,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, ASSIGN in 240 msec 2023-07-14 04:15:52,426 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:15:52,426 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-14 04:15:52,426 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, ASSIGN in 245 msec 2023-07-14 04:15:52,426 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308152426"}]},"ts":"1689308152426"} 2023-07-14 04:15:52,429 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:15:52,429 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308152429"}]},"ts":"1689308152429"} 2023-07-14 04:15:52,431 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 04:15:52,433 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 04:15:52,435 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:15:52,436 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:15:52,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 472 msec 2023-07-14 04:15:52,439 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 438 msec 2023-07-14 04:15:52,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 04:15:52,476 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:15:52,476 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:52,505 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:52,509 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:52,511 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 04:15:52,511 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-14 04:15:52,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 04:15:52,532 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:15:52,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 34 msec 2023-07-14 04:15:52,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 04:15:52,558 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:15:52,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-14 04:15:52,574 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 04:15:52,577 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:52,577 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:52,579 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 04:15:52,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.143sec 2023-07-14 04:15:52,581 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:15:52,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-14 04:15:52,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 04:15:52,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 04:15:52,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34797,1689308146653-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
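CreateNamespaceProcedure runs here for the two built-in namespaces, default and hbase. Creating and listing namespaces from a client goes through the same procedure path; a brief sketch is below, with the extra namespace name made up for illustration.

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    public final class Namespaces {
      static void createAndList(Admin admin) throws Exception {
        // Triggers the same CreateNamespaceProcedure the master ran for 'default' and 'hbase'.
        admin.createNamespace(NamespaceDescriptor.create("scratch_ns").build());
        for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
          System.out.println(ns.getName());  // expect at least: default, hbase, scratch_ns
        }
      }
    }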
2023-07-14 04:15:52,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34797,1689308146653-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-14 04:15:52,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 04:15:52,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 04:15:52,676 DEBUG [Listener at localhost/46681] zookeeper.ReadOnlyZKClient(139): Connect 0x02b3de72 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:52,682 DEBUG [Listener at localhost/46681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e64e86f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:52,702 DEBUG [hconnection-0x6ac6849-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:52,716 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33604, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:52,727 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:15:52,729 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:52,739 DEBUG [Listener at localhost/46681] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 04:15:52,743 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 04:15:52,761 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 04:15:52,761 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:15:52,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-14 04:15:52,768 DEBUG [Listener at localhost/46681] zookeeper.ReadOnlyZKClient(139): Connect 0x120ad869 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:52,774 DEBUG [Listener at localhost/46681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51dbb1fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:52,774 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process 
identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:52,781 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:15:52,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101620b2b57000a connected 2023-07-14 04:15:52,822 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=698, MaxFileDescriptor=60000, SystemLoadAverage=480, ProcessCount=172, AvailableMemoryMB=5150 2023-07-14 04:15:52,825 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-14 04:15:52,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:52,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:52,893 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:15:52,907 INFO [Listener at localhost/46681] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:15:52,911 INFO [Listener at localhost/46681] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37557 2023-07-14 04:15:52,911 INFO [Listener at localhost/46681] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:15:52,912 DEBUG [Listener at localhost/46681] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:15:52,914 INFO [Listener at localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:52,917 INFO [Listener at 
localhost/46681] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:15:52,920 INFO [Listener at localhost/46681] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37557 connecting to ZooKeeper ensemble=127.0.0.1:56534 2023-07-14 04:15:52,926 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:375570x0, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:15:52,928 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(162): regionserver:375570x0, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:15:52,929 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(162): regionserver:375570x0, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-14 04:15:52,930 DEBUG [Listener at localhost/46681] zookeeper.ZKUtil(164): regionserver:375570x0, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:15:52,931 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37557-0x101620b2b57000b connected 2023-07-14 04:15:52,931 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37557 2023-07-14 04:15:52,932 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37557 2023-07-14 04:15:52,932 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37557 2023-07-14 04:15:52,934 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37557 2023-07-14 04:15:52,935 DEBUG [Listener at localhost/46681] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37557 2023-07-14 04:15:52,937 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:15:52,937 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:15:52,937 INFO [Listener at localhost/46681] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:15:52,938 INFO [Listener at localhost/46681] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:15:52,938 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:15:52,938 INFO [Listener at localhost/46681] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:15:52,938 INFO [Listener at localhost/46681] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. 
Disabling /prof endpoint. 2023-07-14 04:15:52,939 INFO [Listener at localhost/46681] http.HttpServer(1146): Jetty bound to port 38173 2023-07-14 04:15:52,939 INFO [Listener at localhost/46681] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:15:52,940 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:52,941 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:15:52,941 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:52,941 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:15:53,099 INFO [Listener at localhost/46681] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:15:53,100 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:15:53,101 INFO [Listener at localhost/46681] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:15:53,101 INFO [Listener at localhost/46681] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:15:53,102 INFO [Listener at localhost/46681] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:15:53,105 INFO [Listener at localhost/46681] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@45ea4e7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/java.io.tmpdir/jetty-0_0_0_0-38173-hbase-server-2_4_18-SNAPSHOT_jar-_-any-333389937566781129/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:15:53,107 INFO [Listener at localhost/46681] server.AbstractConnector(333): Started ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:38173} 2023-07-14 04:15:53,107 INFO [Listener at localhost/46681] server.Server(415): Started @11984ms 2023-07-14 04:15:53,118 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(951): ClusterId : 48ece5fe-c3d3-403c-8aa1-91b39ba284f0 2023-07-14 04:15:53,122 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:15:53,125 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:15:53,125 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:15:53,128 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:15:53,130 DEBUG [RS:3;jenkins-hbase4:37557] 
zookeeper.ReadOnlyZKClient(139): Connect 0x694ffec5 to 127.0.0.1:56534 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:15:53,156 DEBUG [RS:3;jenkins-hbase4:37557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67881d9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:15:53,156 DEBUG [RS:3;jenkins-hbase4:37557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63b795b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:15:53,166 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:37557 2023-07-14 04:15:53,166 INFO [RS:3;jenkins-hbase4:37557] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:15:53,166 INFO [RS:3;jenkins-hbase4:37557] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:15:53,166 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:15:53,167 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34797,1689308146653 with isa=jenkins-hbase4.apache.org/172.31.14.131:37557, startcode=1689308152906 2023-07-14 04:15:53,168 DEBUG [RS:3;jenkins-hbase4:37557] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:15:53,174 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33131, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:15:53,175 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34797] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,175 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
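At this point the test has asked the master to list rsgroups and has started a fourth region server, which the ServerEventsListenerThread folds into the default group. The sketch below shows the client-side call behind RSGroupAdminService.ListRSGroupInfos, written against the branch-2 RSGroupAdminClient as I recall it; treat the class and method names as assumptions to verify against the hbase-rsgroup module actually in use.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class ListGroups {
      // Same RPC the log shows as "list rsgroup" / ListRSGroupInfos (assumed API).
      static void list(Connection conn) throws Exception {
        RSGroupAdmin groupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : groupAdmin.listRSGroups()) {
          System.out.println(group.getName() + " servers=" + group.getServers());
        }
      }
    }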
2023-07-14 04:15:53,176 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4 2023-07-14 04:15:53,176 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33983 2023-07-14 04:15:53,176 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37465 2023-07-14 04:15:53,182 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:15:53,182 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:15:53,182 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:15:53,182 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:15:53,183 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:53,184 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ZKUtil(162): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,184 WARN [RS:3;jenkins-hbase4:37557] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 04:15:53,184 INFO [RS:3;jenkins-hbase4:37557] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:15:53,184 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37557,1689308152906] 2023-07-14 04:15:53,184 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,184 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:53,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:53,185 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:53,186 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:53,186 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:53,186 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:15:53,187 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:53,187 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,208 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34797,1689308146653] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-14 04:15:53,208 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:53,209 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:53,209 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:53,212 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ZKUtil(162): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,212 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ZKUtil(162): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:53,213 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ZKUtil(162): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:53,213 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ZKUtil(162): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:53,214 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:15:53,215 INFO [RS:3;jenkins-hbase4:37557] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:15:53,217 INFO [RS:3;jenkins-hbase4:37557] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:15:53,217 INFO [RS:3;jenkins-hbase4:37557] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:15:53,217 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:53,218 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:15:53,220 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
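The WALFactory entry at 04:15:53,184 shows RS:3 instantiating AsyncFSWALProvider, and the AbstractFSWAL lines further down report blocksize=256 MB, rollsize=128 MB for its WAL. A minimal configuration sketch for pinning that provider follows; the hbase.wal.provider key and "asyncfs" value are standard, but treating hbase.regionserver.hlog.blocksize as the knob behind the reported block size is an assumption, not something the log states.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class AsyncWalConfigSketch {
    // Sketch only: select the async WAL provider seen in this log and the
    // WAL block size it reports. Values mirror the log; keys are assumed
    // to be the stock branch-2.4 ones.
    static Configuration asyncWalConf() {
      Configuration conf = HBaseConfiguration.create();
      conf.set("hbase.wal.provider", "asyncfs");                      // AsyncFSWALProvider
      conf.setLong("hbase.regionserver.hlog.blocksize", 256L << 20);  // "blocksize=256 MB" (assumed key)
      return conf;
    }
  }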
2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,220 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,221 DEBUG [RS:3;jenkins-hbase4:37557] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:15:53,224 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:53,224 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:53,224 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:15:53,237 INFO [RS:3;jenkins-hbase4:37557] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:15:53,237 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37557,1689308152906-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:15:53,249 INFO [RS:3;jenkins-hbase4:37557] regionserver.Replication(203): jenkins-hbase4.apache.org,37557,1689308152906 started 2023-07-14 04:15:53,249 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37557,1689308152906, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37557, sessionid=0x101620b2b57000b 2023-07-14 04:15:53,249 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:15:53,249 DEBUG [RS:3;jenkins-hbase4:37557] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,249 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37557,1689308152906' 2023-07-14 04:15:53,249 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:15:53,249 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37557,1689308152906' 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:15:53,250 DEBUG [RS:3;jenkins-hbase4:37557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:15:53,251 DEBUG [RS:3;jenkins-hbase4:37557] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:15:53,251 INFO [RS:3;jenkins-hbase4:37557] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:15:53,251 INFO [RS:3;jenkins-hbase4:37557] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
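At this point RS:3 (jenkins-hbase4.apache.org,37557,1689308152906) has finished startup, registered its ephemeral znode under /hbase/rs, and the rsgroup listener reports "Updated with servers: 4", so the mini cluster now has a fourth region server. A hedged sketch of adding an extra region server to a running mini cluster and waiting for the master to see it is below; MiniHBaseCluster.startRegionServer() and the Waiter-based waitFor are real test-utility APIs, but whether this particular test brings the server up this way is an assumption.

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.MiniHBaseCluster;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.util.JVMClusterUtil;

  public class ExtraRegionServerSketch {
    // Starts one more region server on an already-running mini cluster and
    // blocks until the master's ServerManager reports it online.
    static ServerName startExtraRegionServer(HBaseTestingUtility util) throws Exception {
      MiniHBaseCluster cluster = util.getMiniHBaseCluster();
      JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
      ServerName sn = rst.getRegionServer().getServerName();
      util.waitFor(60000, () ->
          cluster.getMaster().getServerManager().getOnlineServers().containsKey(sn));
      return sn;
    }
  }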
2023-07-14 04:15:53,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:15:53,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:53,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:53,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:15:53,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:53,274 DEBUG [hconnection-0x11ddf8cf-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:53,279 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33620, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:53,284 DEBUG [hconnection-0x11ddf8cf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:53,286 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46264, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:53,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:53,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:53,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:15:53,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:53,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:60972 deadline: 1689309353300, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:15:53,302 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:15:53,305 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:53,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:53,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:53,307 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:15:53,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:53,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:53,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:53,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:53,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:53,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:53,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:53,326 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:53,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:53,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:53,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:53,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:53,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:53,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:53,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:53,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:53,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:53,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(238): Moving server region 75377afadc385c92d6b322193a5c5a3e, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:53,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, REOPEN/MOVE 2023-07-14 04:15:53,348 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, REOPEN/MOVE 2023-07-14 04:15:53,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 04:15:53,350 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:53,350 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308153350"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308153350"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308153350"}]},"ts":"1689308153350"} 2023-07-14 04:15:53,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:53,354 INFO [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37557%2C1689308152906, suffix=, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,37557,1689308152906, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:15:53,389 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:15:53,389 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:15:53,396 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:15:53,404 INFO [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,37557,1689308152906/jenkins-hbase4.apache.org%2C37557%2C1689308152906.1689308153356 2023-07-14 04:15:53,404 DEBUG [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK], DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK]] 2023-07-14 04:15:53,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:53,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 75377afadc385c92d6b322193a5c5a3e, disabling compactions & flushes 2023-07-14 04:15:53,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:53,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:53,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
after waiting 0 ms 2023-07-14 04:15:53,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:53,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 75377afadc385c92d6b322193a5c5a3e 1/1 column families, dataSize=1.38 KB heapSize=2.35 KB 2023-07-14 04:15:53,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/.tmp/m/7e3e6dbaed0f45d196172653249f81d7 2023-07-14 04:15:53,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/.tmp/m/7e3e6dbaed0f45d196172653249f81d7 as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m/7e3e6dbaed0f45d196172653249f81d7 2023-07-14 04:15:53,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m/7e3e6dbaed0f45d196172653249f81d7, entries=3, sequenceid=9, filesize=5.2 K 2023-07-14 04:15:53,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1410, heapSize ~2.34 KB/2392, currentSize=0 B/0 for 75377afadc385c92d6b322193a5c5a3e in 183ms, sequenceid=9, compaction requested=false 2023-07-14 04:15:53,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-14 04:15:53,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-14 04:15:53,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:15:53,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
2023-07-14 04:15:53,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 75377afadc385c92d6b322193a5c5a3e: 2023-07-14 04:15:53,736 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 75377afadc385c92d6b322193a5c5a3e move to jenkins-hbase4.apache.org,37557,1689308152906 record at close sequenceid=9 2023-07-14 04:15:53,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:53,740 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=CLOSED 2023-07-14 04:15:53,741 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308153740"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308153740"}]},"ts":"1689308153740"} 2023-07-14 04:15:53,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-14 04:15:53,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,34609,1689308148721 in 390 msec 2023-07-14 04:15:53,747 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:15:53,898 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
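pid=12 above is the master-side TransitRegionStateProcedure (REOPEN/MOVE) that relocates the hbase:rsgroup region 75377afadc385c92d6b322193a5c5a3e off the servers being moved into the new group; it is triggered internally by RSGroupAdminServer rather than by a client call. For comparison, an explicit client-initiated move of the same region would look roughly like the sketch below, assuming an open Admin handle; the encoded region name and destination server are copied from the log.

  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ManualRegionMoveSketch {
    // Ask the master to move one region to a specific server; the master runs
    // the same REOPEN/MOVE flow (close on the old server, open on the new one).
    static void moveRsGroupRegion(Admin admin) throws Exception {
      byte[] encodedName = Bytes.toBytes("75377afadc385c92d6b322193a5c5a3e");
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 37557, 1689308152906L);
      admin.move(encodedName, dest);
    }
  }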
2023-07-14 04:15:53,898 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:53,899 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308153898"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308153898"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308153898"}]},"ts":"1689308153898"} 2023-07-14 04:15:53,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:54,056 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:54,057 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:54,060 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47584, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:54,065 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:54,065 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 75377afadc385c92d6b322193a5c5a3e, NAME => 'hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. service=MultiRowMutationService 2023-07-14 04:15:54,066 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,066 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,068 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,069 DEBUG [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m 2023-07-14 04:15:54,070 DEBUG [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m 2023-07-14 04:15:54,070 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 75377afadc385c92d6b322193a5c5a3e columnFamilyName m 2023-07-14 04:15:54,086 DEBUG [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] regionserver.HStore(539): loaded hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m/7e3e6dbaed0f45d196172653249f81d7 2023-07-14 04:15:54,087 INFO [StoreOpener-75377afadc385c92d6b322193a5c5a3e-1] regionserver.HStore(310): Store=75377afadc385c92d6b322193a5c5a3e/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:54,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,092 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:15:54,099 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 75377afadc385c92d6b322193a5c5a3e; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@42a9b319, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:54,099 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 75377afadc385c92d6b322193a5c5a3e: 2023-07-14 04:15:54,101 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e., pid=14, masterSystemTime=1689308154056 2023-07-14 04:15:54,107 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=75377afadc385c92d6b322193a5c5a3e, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:54,108 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308154107"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308154107"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308154107"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308154107"}]},"ts":"1689308154107"} 2023-07-14 04:15:54,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:15:54,113 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
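The entries above show 75377afadc385c92d6b322193a5c5a3e reopening on jenkins-hbase4.apache.org,37557 with next sequenceid=13, and the ClientService calls that follow hit RegionMovedException until their location caches refresh against meta. A small sketch of how a client could re-resolve the region's location after such a move is given below, assuming an existing Connection; it only illustrates the lookup, not anything this test necessarily does.

  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.HRegionLocation;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.RegionLocator;

  public class RegionLocationSketch {
    // Force a fresh meta lookup (reload=true) so a cached, stale location is not returned.
    static HRegionLocation whereIsRsGroupRegion(Connection conn) throws Exception {
      try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
        return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      }
    }
  }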
2023-07-14 04:15:54,118 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-14 04:15:54,119 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 75377afadc385c92d6b322193a5c5a3e, server=jenkins-hbase4.apache.org,37557,1689308152906 in 211 msec 2023-07-14 04:15:54,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=75377afadc385c92d6b322193a5c5a3e, REOPEN/MOVE in 773 msec 2023-07-14 04:15:54,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-14 04:15:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to default 2023-07-14 04:15:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:54,353 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34609] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:46264 deadline: 1689308214352, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37557 startCode=1689308152906. As of locationSeqNum=9. 
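"Move servers done: default => Group_testTableMoveTruncateAndDrop_84229406" closes out the AddRSGroup/MoveServers round-trip that began at 04:15:53,319, while the earlier ConstraintException shows the same MoveServers call being rejected when it names the master's own address. A hedged sketch of the corresponding client-side calls is below; RSGroupAdminClient and its addRSGroup/moveServers/getRSGroupInfo methods appear in the stack traces above, but the connection setup and set construction here are assumptions.

  import java.util.Arrays;
  import java.util.HashSet;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class RsGroupMoveSketch {
    // Create the test group and move two region servers into it, mirroring the
    // requests logged above. Host/port values are copied from the log.
    static RSGroupInfo createGroupAndMoveServers(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_84229406";
      rsGroupAdmin.addRSGroup(group);
      rsGroupAdmin.moveServers(
          new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 33827),
              Address.fromParts("jenkins-hbase4.apache.org", 34609))),
          group);
      return rsGroupAdmin.getRSGroupInfo(group);
    }
  }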
2023-07-14 04:15:54,460 DEBUG [hconnection-0x11ddf8cf-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:54,465 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:54,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:54,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:54,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:54,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:54,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:15:54,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:54,515 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:15:54,518 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34609] ipc.CallRunner(144): callId: 42 service: ClientService methodName: ExecService size: 617 connection: 172.31.14.131:46252 deadline: 1689308214518, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37557 startCode=1689308152906. As of locationSeqNum=9. 
2023-07-14 04:15:54,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-14 04:15:54,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:15:54,625 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:15:54,629 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47602, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:15:54,633 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:54,634 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:54,635 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:54,635 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:54,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:15:54,640 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:15:54,647 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:54,647 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:54,647 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:54,647 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:54,648 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:54,648 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 empty. 2023-07-14 04:15:54,648 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf empty. 
2023-07-14 04:15:54,648 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f empty. 2023-07-14 04:15:54,649 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d empty. 2023-07-14 04:15:54,649 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa empty. 2023-07-14 04:15:54,649 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:54,649 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:54,650 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:54,650 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:54,651 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:54,651 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 04:15:54,689 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:54,691 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 4ae08bec49a5131a2adce5e080b39421, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:54,691 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3f06c5e71f6abb4c3ee0c166f85d4e6f, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:54,691 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => be4fd1e007dba543a11373f4d78c0dbf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:54,779 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,780 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,781 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3f06c5e71f6abb4c3ee0c166f85d4e6f, disabling compactions & flushes 2023-07-14 04:15:54,781 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 4ae08bec49a5131a2adce5e080b39421, disabling compactions & flushes 2023-07-14 04:15:54,781 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:54,781 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:54,782 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:54,782 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 
after waiting 0 ms 2023-07-14 04:15:54,782 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:54,782 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:54,782 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 4ae08bec49a5131a2adce5e080b39421: 2023-07-14 04:15:54,782 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => cce0aecf0b5763ffbd5c8e8db63f128d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:54,782 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:54,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. after waiting 0 ms 2023-07-14 04:15:54,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:54,784 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 
2023-07-14 04:15:54,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3f06c5e71f6abb4c3ee0c166f85d4e6f: 2023-07-14 04:15:54,784 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e7c9db408e40d16e065ca42c233561aa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:54,788 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,789 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing be4fd1e007dba543a11373f4d78c0dbf, disabling compactions & flushes 2023-07-14 04:15:54,789 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:54,789 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:54,789 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. after waiting 0 ms 2023-07-14 04:15:54,789 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:54,789 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 
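[Editor's sketch] By this point the log has printed the full table descriptor and all five region boundaries ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '') for Group_testTableMoveTruncateAndDrop. The following is a minimal client-side sketch of a createTable call that would produce an equivalent pre-split table; it is not lifted from TestRSGroupsAdmin1 itself, and the class name and connection bootstrap are illustrative. The descriptor attributes mirror what HRegion(7675) logs above.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

        // Column family 'f' as printed in the creation entries: VERSIONS=1,
        // BLOOMFILTER=NONE, BLOCKSIZE=65536, no compression or encoding.
        ColumnFamilyDescriptorBuilder cf =
            ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .setBloomFilterType(BloomType.NONE)
                .setBlocksize(65536);

        TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(tn)
            .setRegionReplication(1)          // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
            .setColumnFamily(cf.build());

        // Split points for the five regions logged above; the middle two keys
        // carry the binary bytes printed as \xBF\x14 and \x1C\xC7 in the log.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
            new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
            Bytes.toBytes("zzzzz")
        };

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Triggers a CreateTableProcedure on the master (pid=15 in this log).
          admin.createTable(table.build(), splits);
        }
      }
    }
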
2023-07-14 04:15:54,789 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for be4fd1e007dba543a11373f4d78c0dbf: 2023-07-14 04:15:54,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,824 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing cce0aecf0b5763ffbd5c8e8db63f128d, disabling compactions & flushes 2023-07-14 04:15:54,824 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:54,824 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:54,824 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. after waiting 0 ms 2023-07-14 04:15:54,824 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:54,825 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:54,825 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for cce0aecf0b5763ffbd5c8e8db63f128d: 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e7c9db408e40d16e065ca42c233561aa, disabling compactions & flushes 2023-07-14 04:15:54,828 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 
after waiting 0 ms 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:54,828 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:54,828 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e7c9db408e40d16e065ca42c233561aa: 2023-07-14 04:15:54,833 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:15:54,834 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308154834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308154834"}]},"ts":"1689308154834"} 2023-07-14 04:15:54,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308154834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308154834"}]},"ts":"1689308154834"} 2023-07-14 04:15:54,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308154834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308154834"}]},"ts":"1689308154834"} 2023-07-14 04:15:54,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308154834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308154834"}]},"ts":"1689308154834"} 2023-07-14 04:15:54,835 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308154834"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308154834"}]},"ts":"1689308154834"} 2023-07-14 04:15:54,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:15:54,888 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
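[Editor's sketch] The CREATE_TABLE_ADD_TO_META step above writes one regioninfo/state row per region into hbase:meta ("Added 5 regions to meta"). MetaTableAccessor is master-internal; from a client the same information can be read back with Admin#getRegions, as in this sketch (connection bootstrap illustrative).

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListTableRegionsSketch {
      public static void main(String[] args) throws IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Each entry corresponds to one of the regioninfo rows the master
          // just put into hbase:meta.
          for (RegionInfo region : admin.getRegions(tn)) {
            System.out.println(region.getEncodedName() + "  ["
                + Bytes.toStringBinary(region.getStartKey()) + ", "
                + Bytes.toStringBinary(region.getEndKey()) + ")");
          }
        }
      }
    }
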
2023-07-14 04:15:54,890 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:15:54,890 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308154890"}]},"ts":"1689308154890"} 2023-07-14 04:15:54,892 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-14 04:15:54,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:54,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:54,902 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:54,902 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:54,902 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, ASSIGN}] 2023-07-14 04:15:54,908 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, ASSIGN 2023-07-14 04:15:54,908 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, ASSIGN 2023-07-14 04:15:54,909 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, ASSIGN 2023-07-14 04:15:54,909 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, ASSIGN 2023-07-14 04:15:54,911 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:15:54,911 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:15:54,911 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, ASSIGN 2023-07-14 04:15:54,911 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:15:54,911 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:15:54,915 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:15:55,061 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
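[Editor's sketch] CREATE_TABLE_ASSIGN_REGIONS has now chosen a target server for each region (forceNewPlan=false, retain=false) and the balancer reports "Reassigned 5 regions". Since the chosen locations are persisted to hbase:meta by the assign procedures, a client can observe the region-to-server mapping through RegionLocator; a sketch under the same illustrative connection bootstrap:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionPlacementSketch {
      public static void main(String[] args) throws IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // getAllRegionLocations() reads hbase:meta, so once the assign
          // procedures above finish it reflects the regionLocation values
          // (jenkins-hbase4.apache.org,37557,... or ...,34763,... in this run).
          for (HRegionLocation location : locator.getAllRegionLocations()) {
            System.out.println(location.getRegion().getEncodedName()
                + " -> " + location.getServerName());
          }
        }
      }
    }
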
2023-07-14 04:15:55,065 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,065 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,066 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155065"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155065"}]},"ts":"1689308155065"} 2023-07-14 04:15:55,066 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155065"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155065"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155065"}]},"ts":"1689308155065"} 2023-07-14 04:15:55,066 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,066 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,067 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155066"}]},"ts":"1689308155066"} 2023-07-14 04:15:55,067 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155066"}]},"ts":"1689308155066"} 2023-07-14 04:15:55,066 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,067 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155066"}]},"ts":"1689308155066"} 2023-07-14 04:15:55,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 
3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:55,072 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=19, state=RUNNABLE; OpenRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:55,073 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=20, state=RUNNABLE; OpenRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,079 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,079 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=17, state=RUNNABLE; OpenRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:15:55,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be4fd1e007dba543a11373f4d78c0dbf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 04:15:55,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cce0aecf0b5763ffbd5c8e8db63f128d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 04:15:55,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): 
Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,251 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,251 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,253 DEBUG [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/f 2023-07-14 04:15:55,253 DEBUG [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/f 2023-07-14 04:15:55,253 DEBUG [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/f 2023-07-14 04:15:55,253 DEBUG [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/f 2023-07-14 04:15:55,254 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cce0aecf0b5763ffbd5c8e8db63f128d columnFamilyName f 2023-07-14 04:15:55,255 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] regionserver.HStore(310): Store=cce0aecf0b5763ffbd5c8e8db63f128d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-14 04:15:55,256 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be4fd1e007dba543a11373f4d78c0dbf columnFamilyName f 2023-07-14 04:15:55,258 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] regionserver.HStore(310): Store=be4fd1e007dba543a11373f4d78c0dbf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:55,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:55,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:55,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be4fd1e007dba543a11373f4d78c0dbf; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9503361760, jitterRate=-0.11493046581745148}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:55,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be4fd1e007dba543a11373f4d78c0dbf: 2023-07-14 04:15:55,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cce0aecf0b5763ffbd5c8e8db63f128d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10476256960, jitterRate=-0.024322539567947388}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:55,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cce0aecf0b5763ffbd5c8e8db63f128d: 2023-07-14 04:15:55,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf., pid=25, masterSystemTime=1689308155242 2023-07-14 04:15:55,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d., pid=22, masterSystemTime=1689308155242 2023-07-14 04:15:55,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 
2023-07-14 04:15:55,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7c9db408e40d16e065ca42c233561aa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 04:15:55,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:55,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,292 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 
2023-07-14 04:15:55,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3f06c5e71f6abb4c3ee0c166f85d4e6f, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 04:15:55,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:55,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,292 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155291"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308155291"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308155291"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308155291"}]},"ts":"1689308155291"} 2023-07-14 04:15:55,296 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,296 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155295"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308155295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308155295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308155295"}]},"ts":"1689308155295"} 2023-07-14 04:15:55,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=17 2023-07-14 04:15:55,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, ASSIGN in 404 msec 2023-07-14 04:15:55,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, ASSIGN in 404 msec 2023-07-14 04:15:55,318 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,306 INFO 
[StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=19 2023-07-14 04:15:55,318 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=17, state=SUCCESS; OpenRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,34763,1689308149192 in 221 msec 2023-07-14 04:15:55,318 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=19, state=SUCCESS; OpenRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,37557,1689308152906 in 229 msec 2023-07-14 04:15:55,321 DEBUG [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/f 2023-07-14 04:15:55,321 DEBUG [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/f 2023-07-14 04:15:55,322 DEBUG [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/f 2023-07-14 04:15:55,322 DEBUG [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/f 2023-07-14 04:15:55,322 INFO [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7c9db408e40d16e065ca42c233561aa columnFamilyName f 2023-07-14 04:15:55,323 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3f06c5e71f6abb4c3ee0c166f85d4e6f columnFamilyName f 2023-07-14 04:15:55,323 INFO [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] regionserver.HStore(310): Store=e7c9db408e40d16e065ca42c233561aa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:55,324 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] regionserver.HStore(310): Store=3f06c5e71f6abb4c3ee0c166f85d4e6f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:55,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:55,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:55,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e7c9db408e40d16e065ca42c233561aa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11104929120, jitterRate=0.03422711789608002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:55,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3f06c5e71f6abb4c3ee0c166f85d4e6f; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11482147040, jitterRate=0.06935827434062958}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:55,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e7c9db408e40d16e065ca42c233561aa: 2023-07-14 04:15:55,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3f06c5e71f6abb4c3ee0c166f85d4e6f: 2023-07-14 04:15:55,348 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa., pid=23, masterSystemTime=1689308155242 2023-07-14 04:15:55,348 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f., pid=21, masterSystemTime=1689308155242 2023-07-14 04:15:55,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,352 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:55,352 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155352"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308155352"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308155352"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308155352"}]},"ts":"1689308155352"} 2023-07-14 04:15:55,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:55,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 
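[Editor's sketch] Four of the five regions have reported "Opened" by this point (the fifth, 4ae08bec49a5131a2adce5e080b39421, opens a few entries further down), so the table becomes usable from the client side. A put/get round trip against column family 'f' is a quick smoke test; the row key, qualifier, and value below are illustrative, not taken from the test.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TableSmokeTestSketch {
      public static void main(String[] args) throws IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        byte[] family = Bytes.toBytes("f");       // the only column family in this table
        byte[] row = Bytes.toBytes("abc-0001");   // sorts into the 'aaaaa'..'i\xBF\x14i\xBE' region
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(tn)) {
          table.put(new Put(row).addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("v1")));
          Result result = table.get(new Get(row));
          System.out.println("read back: "
              + Bytes.toString(result.getValue(family, Bytes.toBytes("q"))));
        }
      }
    }
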
2023-07-14 04:15:55,353 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ae08bec49a5131a2adce5e080b39421, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 04:15:55,354 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155353"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308155353"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308155353"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308155353"}]},"ts":"1689308155353"} 2023-07-14 04:15:55,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:55,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,357 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-14 04:15:55,361 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,37557,1689308152906 in 284 msec 2023-07-14 04:15:55,363 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, ASSIGN in 459 msec 2023-07-14 04:15:55,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=20 2023-07-14 04:15:55,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=20, state=SUCCESS; OpenRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34763,1689308149192 in 286 msec 2023-07-14 04:15:55,365 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, ASSIGN in 461 msec 2023-07-14 04:15:55,368 DEBUG [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/f 2023-07-14 04:15:55,368 DEBUG [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/f 2023-07-14 04:15:55,369 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ae08bec49a5131a2adce5e080b39421 columnFamilyName f 2023-07-14 04:15:55,370 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] regionserver.HStore(310): Store=4ae08bec49a5131a2adce5e080b39421/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:55,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:55,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4ae08bec49a5131a2adce5e080b39421; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11678818080, jitterRate=0.08767469227313995}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:55,402 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4ae08bec49a5131a2adce5e080b39421: 2023-07-14 04:15:55,403 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421., pid=24, masterSystemTime=1689308155242 2023-07-14 04:15:55,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:55,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:55,406 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,407 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155406"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308155406"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308155406"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308155406"}]},"ts":"1689308155406"} 2023-07-14 04:15:55,414 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-14 04:15:55,414 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34763,1689308149192 in 330 msec 2023-07-14 04:15:55,418 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-14 04:15:55,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, ASSIGN in 512 msec 2023-07-14 04:15:55,421 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:15:55,421 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308155421"}]},"ts":"1689308155421"} 2023-07-14 04:15:55,423 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-14 04:15:55,428 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:15:55,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 918 msec 2023-07-14 04:15:55,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:15:55,645 INFO [Listener at localhost/46681] 
client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-14 04:15:55,645 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-14 04:15:55,646 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:55,653 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-14 04:15:55,654 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:55,654 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-14 04:15:55,654 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:55,660 DEBUG [Listener at localhost/46681] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:55,664 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35158, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:55,668 DEBUG [Listener at localhost/46681] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:55,671 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:55,671 DEBUG [Listener at localhost/46681] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:55,674 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:55,676 DEBUG [Listener at localhost/46681] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:55,681 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47614, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:55,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:55,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:15:55,695 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:55,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:55,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:55,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 3f06c5e71f6abb4c3ee0c166f85d4e6f to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:55,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:55,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, REOPEN/MOVE 2023-07-14 04:15:55,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region be4fd1e007dba543a11373f4d78c0dbf to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,718 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, REOPEN/MOVE 2023-07-14 04:15:55,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:55,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:55,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:55,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:55,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 
04:15:55,720 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,721 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155720"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155720"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155720"}]},"ts":"1689308155720"} 2023-07-14 04:15:55,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, REOPEN/MOVE 2023-07-14 04:15:55,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 4ae08bec49a5131a2adce5e080b39421 to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,722 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, REOPEN/MOVE 2023-07-14 04:15:55,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:55,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:55,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:55,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:55,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:55,725 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:55,725 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,725 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155725"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155725"}]},"ts":"1689308155725"} 2023-07-14 04:15:55,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, REOPEN/MOVE 2023-07-14 04:15:55,726 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region cce0aecf0b5763ffbd5c8e8db63f128d to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,727 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, REOPEN/MOVE 2023-07-14 04:15:55,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:55,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:55,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:55,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:55,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:55,729 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,730 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155729"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155729"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155729"}]},"ts":"1689308155729"} 2023-07-14 04:15:55,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, REOPEN/MOVE 2023-07-14 04:15:55,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region e7c9db408e40d16e065ca42c233561aa to RSGroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:55,731 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, REOPEN/MOVE 2023-07-14 04:15:55,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:55,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:55,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:55,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:15:55,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:55,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=27, state=RUNNABLE; CloseRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,734 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,734 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:15:55,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, REOPEN/MOVE 2023-07-14 04:15:55,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155734"}]},"ts":"1689308155734"} 2023-07-14 04:15:55,736 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, REOPEN/MOVE 2023-07-14 04:15:55,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_84229406, current retry=0 2023-07-14 04:15:55,739 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:55,739 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155739"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308155739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308155739"}]},"ts":"1689308155739"} 2023-07-14 04:15:55,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:55,742 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=31, state=RUNNABLE; CloseRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:55,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,888 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3f06c5e71f6abb4c3ee0c166f85d4e6f, disabling 
compactions & flushes 2023-07-14 04:15:55,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. after waiting 0 ms 2023-07-14 04:15:55,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4ae08bec49a5131a2adce5e080b39421, disabling compactions & flushes 2023-07-14 04:15:55,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:55,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:55,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. after waiting 0 ms 2023-07-14 04:15:55,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:55,918 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:55,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:55,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 
2023-07-14 04:15:55,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4ae08bec49a5131a2adce5e080b39421: 2023-07-14 04:15:55,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4ae08bec49a5131a2adce5e080b39421 move to jenkins-hbase4.apache.org,34609,1689308148721 record at close sequenceid=2 2023-07-14 04:15:55,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:55,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3f06c5e71f6abb4c3ee0c166f85d4e6f: 2023-07-14 04:15:55,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3f06c5e71f6abb4c3ee0c166f85d4e6f move to jenkins-hbase4.apache.org,33827,1689308148910 record at close sequenceid=2 2023-07-14 04:15:55,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:55,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,934 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=CLOSED 2023-07-14 04:15:55,934 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155934"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308155934"}]},"ts":"1689308155934"} 2023-07-14 04:15:55,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e7c9db408e40d16e065ca42c233561aa, disabling compactions & flushes 2023-07-14 04:15:55,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:55,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:55,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. after waiting 0 ms 2023-07-14 04:15:55,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 
2023-07-14 04:15:55,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:55,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cce0aecf0b5763ffbd5c8e8db63f128d, disabling compactions & flushes 2023-07-14 04:15:55,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. after waiting 0 ms 2023-07-14 04:15:55,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:55,953 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=CLOSED 2023-07-14 04:15:55,953 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155953"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308155953"}]},"ts":"1689308155953"} 2023-07-14 04:15:55,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-14 04:15:55,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34763,1689308149192 in 220 msec 2023-07-14 04:15:55,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-14 04:15:55,960 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:55,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,37557,1689308152906 in 231 msec 2023-07-14 04:15:55,962 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 
04:15:55,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:55,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:55,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e7c9db408e40d16e065ca42c233561aa: 2023-07-14 04:15:55,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e7c9db408e40d16e065ca42c233561aa move to jenkins-hbase4.apache.org,34609,1689308148721 record at close sequenceid=2 2023-07-14 04:15:55,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:55,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be4fd1e007dba543a11373f4d78c0dbf, disabling compactions & flushes 2023-07-14 04:15:55,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. after waiting 0 ms 2023-07-14 04:15:55,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:55,979 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=CLOSED 2023-07-14 04:15:55,979 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308155979"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308155979"}]},"ts":"1689308155979"} 2023-07-14 04:15:55,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:55,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 
2023-07-14 04:15:55,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cce0aecf0b5763ffbd5c8e8db63f128d: 2023-07-14 04:15:55,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cce0aecf0b5763ffbd5c8e8db63f128d move to jenkins-hbase4.apache.org,33827,1689308148910 record at close sequenceid=2 2023-07-14 04:15:55,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:55,987 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=CLOSED 2023-07-14 04:15:55,987 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155987"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308155987"}]},"ts":"1689308155987"} 2023-07-14 04:15:55,989 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=31 2023-07-14 04:15:55,989 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=31, state=SUCCESS; CloseRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34763,1689308149192 in 242 msec 2023-07-14 04:15:55,991 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:55,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:55,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 
2023-07-14 04:15:55,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be4fd1e007dba543a11373f4d78c0dbf: 2023-07-14 04:15:55,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding be4fd1e007dba543a11373f4d78c0dbf move to jenkins-hbase4.apache.org,33827,1689308148910 record at close sequenceid=2 2023-07-14 04:15:55,994 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-14 04:15:55,994 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,37557,1689308152906 in 251 msec 2023-07-14 04:15:55,996 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:15:55,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:55,998 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=CLOSED 2023-07-14 04:15:55,999 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308155998"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308155998"}]},"ts":"1689308155998"} 2023-07-14 04:15:56,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=27 2023-07-14 04:15:56,008 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=27, state=SUCCESS; CloseRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,34763,1689308149192 in 271 msec 2023-07-14 04:15:56,009 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:15:56,111 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 04:15:56,111 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,111 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,111 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,111 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,111 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,112 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156111"}]},"ts":"1689308156111"} 2023-07-14 04:15:56,112 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156111"}]},"ts":"1689308156111"} 2023-07-14 04:15:56,112 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156111"}]},"ts":"1689308156111"} 2023-07-14 04:15:56,112 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156111"}]},"ts":"1689308156111"} 2023-07-14 04:15:56,112 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156111"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156111"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156111"}]},"ts":"1689308156111"} 2023-07-14 04:15:56,115 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=27, state=RUNNABLE; OpenRegionProcedure 
be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; OpenRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,119 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=26, state=RUNNABLE; OpenRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,121 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=28, state=RUNNABLE; OpenRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:56,124 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=31, state=RUNNABLE; OpenRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:56,268 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,268 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:15:56,273 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35172, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:15:56,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 
2023-07-14 04:15:56,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be4fd1e007dba543a11373f4d78c0dbf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 04:15:56,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:56,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,280 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 
2023-07-14 04:15:56,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7c9db408e40d16e065ca42c233561aa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 04:15:56,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:56,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,282 DEBUG [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/f 2023-07-14 04:15:56,282 DEBUG [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/f 2023-07-14 04:15:56,284 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be4fd1e007dba543a11373f4d78c0dbf columnFamilyName f 2023-07-14 04:15:56,285 INFO [StoreOpener-be4fd1e007dba543a11373f4d78c0dbf-1] regionserver.HStore(310): Store=be4fd1e007dba543a11373f4d78c0dbf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:56,287 INFO [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 
2023-07-14 04:15:56,291 DEBUG [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/f 2023-07-14 04:15:56,291 DEBUG [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/f 2023-07-14 04:15:56,292 INFO [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7c9db408e40d16e065ca42c233561aa columnFamilyName f 2023-07-14 04:15:56,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,293 INFO [StoreOpener-e7c9db408e40d16e065ca42c233561aa-1] regionserver.HStore(310): Store=e7c9db408e40d16e065ca42c233561aa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:56,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:56,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be4fd1e007dba543a11373f4d78c0dbf; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11759893920, jitterRate=0.09522546827793121}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:56,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be4fd1e007dba543a11373f4d78c0dbf: 2023-07-14 04:15:56,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf., 
pid=36, masterSystemTime=1689308156268 2023-07-14 04:15:56,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:56,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e7c9db408e40d16e065ca42c233561aa; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10657934720, jitterRate=-0.007402479648590088}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:56,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e7c9db408e40d16e065ca42c233561aa: 2023-07-14 04:15:56,312 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa., pid=40, masterSystemTime=1689308156276 2023-07-14 04:15:56,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:56,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:56,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:56,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cce0aecf0b5763ffbd5c8e8db63f128d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 04:15:56,321 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,321 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156320"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308156320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308156320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308156320"}]},"ts":"1689308156320"} 2023-07-14 04:15:56,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:56,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 
04:15:56,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:56,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:56,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:56,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ae08bec49a5131a2adce5e080b39421, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 04:15:56,323 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,323 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308156322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308156322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308156322"}]},"ts":"1689308156322"} 2023-07-14 04:15:56,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:56,324 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,326 DEBUG [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/f 2023-07-14 04:15:56,326 DEBUG 
[StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/f 2023-07-14 04:15:56,327 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cce0aecf0b5763ffbd5c8e8db63f128d columnFamilyName f 2023-07-14 04:15:56,328 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,328 INFO [StoreOpener-cce0aecf0b5763ffbd5c8e8db63f128d-1] regionserver.HStore(310): Store=cce0aecf0b5763ffbd5c8e8db63f128d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:56,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=27 2023-07-14 04:15:56,328 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; OpenRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,33827,1689308148910 in 210 msec 2023-07-14 04:15:56,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,330 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=31 2023-07-14 04:15:56,331 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=31, state=SUCCESS; OpenRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34609,1689308148721 in 202 msec 2023-07-14 04:15:56,331 DEBUG [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/f 2023-07-14 04:15:56,331 DEBUG [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/f 2023-07-14 04:15:56,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,332 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, REOPEN/MOVE in 609 msec 2023-07-14 04:15:56,332 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ae08bec49a5131a2adce5e080b39421 columnFamilyName f 2023-07-14 04:15:56,333 INFO [StoreOpener-4ae08bec49a5131a2adce5e080b39421-1] regionserver.HStore(310): Store=4ae08bec49a5131a2adce5e080b39421/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:56,334 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, REOPEN/MOVE in 600 msec 2023-07-14 04:15:56,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:56,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cce0aecf0b5763ffbd5c8e8db63f128d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9879830240, jitterRate=-0.0798691064119339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:56,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cce0aecf0b5763ffbd5c8e8db63f128d: 2023-07-14 04:15:56,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d., pid=37, masterSystemTime=1689308156268 2023-07-14 04:15:56,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:56,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:56,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:56,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3f06c5e71f6abb4c3ee0c166f85d4e6f, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 04:15:56,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:56,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,347 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,348 DEBUG [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/f 2023-07-14 04:15:56,348 DEBUG [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/f 2023-07-14 04:15:56,349 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3f06c5e71f6abb4c3ee0c166f85d4e6f columnFamilyName f 2023-07-14 04:15:56,350 INFO [StoreOpener-3f06c5e71f6abb4c3ee0c166f85d4e6f-1] 
regionserver.HStore(310): Store=3f06c5e71f6abb4c3ee0c166f85d4e6f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:56,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,352 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,352 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156352"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308156352"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308156352"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308156352"}]},"ts":"1689308156352"} 2023-07-14 04:15:56,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4ae08bec49a5131a2adce5e080b39421; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11296496320, jitterRate=0.05206820368766785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:56,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4ae08bec49a5131a2adce5e080b39421: 2023-07-14 04:15:56,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421., pid=39, masterSystemTime=1689308156276 2023-07-14 04:15:56,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:56,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 
2023-07-14 04:15:56,362 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-14 04:15:56,362 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; OpenRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,33827,1689308148910 in 238 msec 2023-07-14 04:15:56,363 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,363 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156363"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308156363"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308156363"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308156363"}]},"ts":"1689308156363"} 2023-07-14 04:15:56,366 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, REOPEN/MOVE in 634 msec 2023-07-14 04:15:56,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3f06c5e71f6abb4c3ee0c166f85d4e6f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10907734880, jitterRate=0.01586197316646576}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:56,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3f06c5e71f6abb4c3ee0c166f85d4e6f: 2023-07-14 04:15:56,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f., pid=38, masterSystemTime=1689308156268 2023-07-14 04:15:56,385 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=28 2023-07-14 04:15:56,385 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=28, state=SUCCESS; OpenRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34609,1689308148721 in 256 msec 2023-07-14 04:15:56,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:56,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 
2023-07-14 04:15:56,387 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,387 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156387"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308156387"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308156387"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308156387"}]},"ts":"1689308156387"} 2023-07-14 04:15:56,389 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, REOPEN/MOVE in 662 msec 2023-07-14 04:15:56,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=26 2023-07-14 04:15:56,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=26, state=SUCCESS; OpenRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,33827,1689308148910 in 271 msec 2023-07-14 04:15:56,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, REOPEN/MOVE in 687 msec 2023-07-14 04:15:56,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-14 04:15:56,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_84229406. 
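[Editor's note] The records above close out the RSGroupAdminService.MoveTables request: every region of Group_testTableMoveTruncateAndDrop has been reopened on servers of the target group and RSGroupAdminServer reports "All regions ... moved to target group Group_testTableMoveTruncateAndDrop_84229406". For orientation only, here is a minimal client-side sketch of that kind of move, assuming the RSGroupAdminClient helper shipped with the hbase-rsgroup module (the class and group/table names are taken from this log, but the snippet itself is illustrative, not part of the test output):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Ask the master to move the table; this is what drives the
          // REOPEN/MOVE TransitRegionStateProcedures (pids 26-31) logged above.
          rsGroupAdmin.moveTables(Collections.singleton(table),
              "Group_testTableMoveTruncateAndDrop_84229406");
          // Confirm the assignment, mirroring the GetRSGroupInfoOfTable RPC
          // that appears a few records later in the log.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("table now in group: " + info.getName());
        }
      }
    }

The moveTables call returns only after the master has finished the region moves, which is why the test can immediately query the group of the table afterwards.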
2023-07-14 04:15:56,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:56,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:56,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:56,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:56,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:15:56,750 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:56,758 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:56,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:56,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:56,792 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308156792"}]},"ts":"1689308156792"} 2023-07-14 04:15:56,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 04:15:56,794 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-14 04:15:56,796 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-14 04:15:56,802 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, UNASSIGN}] 2023-07-14 04:15:56,810 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, UNASSIGN 2023-07-14 04:15:56,811 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, UNASSIGN 2023-07-14 04:15:56,811 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, UNASSIGN 2023-07-14 04:15:56,811 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, UNASSIGN 2023-07-14 04:15:56,812 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, UNASSIGN 2023-07-14 04:15:56,814 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,814 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,814 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,815 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156814"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156814"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156814"}]},"ts":"1689308156814"} 2023-07-14 04:15:56,815 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156814"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156814"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156814"}]},"ts":"1689308156814"} 2023-07-14 04:15:56,815 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156814"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156814"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156814"}]},"ts":"1689308156814"} 2023-07-14 04:15:56,815 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:56,815 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308156815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156815"}]},"ts":"1689308156815"} 2023-07-14 04:15:56,816 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:56,816 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308156816"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308156816"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308156816"}]},"ts":"1689308156816"} 2023-07-14 04:15:56,823 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=44, state=RUNNABLE; CloseRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:56,825 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=42, state=RUNNABLE; CloseRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,832 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=43, state=RUNNABLE; CloseRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,835 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:56,837 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:56,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 04:15:56,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:56,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4ae08bec49a5131a2adce5e080b39421, disabling compactions & flushes 2023-07-14 04:15:56,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:56,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 
2023-07-14 04:15:56,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. after waiting 0 ms 2023-07-14 04:15:56,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:56,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:56,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3f06c5e71f6abb4c3ee0c166f85d4e6f, disabling compactions & flushes 2023-07-14 04:15:56,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:56,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:56,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. after waiting 0 ms 2023-07-14 04:15:56,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 2023-07-14 04:15:57,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:15:57,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:15:57,044 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421. 2023-07-14 04:15:57,044 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4ae08bec49a5131a2adce5e080b39421: 2023-07-14 04:15:57,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f. 
2023-07-14 04:15:57,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3f06c5e71f6abb4c3ee0c166f85d4e6f: 2023-07-14 04:15:57,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:57,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:57,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e7c9db408e40d16e065ca42c233561aa, disabling compactions & flushes 2023-07-14 04:15:57,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:57,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:57,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. after waiting 0 ms 2023-07-14 04:15:57,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 2023-07-14 04:15:57,068 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=4ae08bec49a5131a2adce5e080b39421, regionState=CLOSED 2023-07-14 04:15:57,068 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157068"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157068"}]},"ts":"1689308157068"} 2023-07-14 04:15:57,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:57,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:57,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cce0aecf0b5763ffbd5c8e8db63f128d, disabling compactions & flushes 2023-07-14 04:15:57,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:57,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:57,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 
after waiting 0 ms 2023-07-14 04:15:57,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:57,074 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=3f06c5e71f6abb4c3ee0c166f85d4e6f, regionState=CLOSED 2023-07-14 04:15:57,074 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308157074"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157074"}]},"ts":"1689308157074"} 2023-07-14 04:15:57,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=44 2023-07-14 04:15:57,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; CloseRegionProcedure 4ae08bec49a5131a2adce5e080b39421, server=jenkins-hbase4.apache.org,34609,1689308148721 in 253 msec 2023-07-14 04:15:57,094 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=42 2023-07-14 04:15:57,094 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=42, state=SUCCESS; CloseRegionProcedure 3f06c5e71f6abb4c3ee0c166f85d4e6f, server=jenkins-hbase4.apache.org,33827,1689308148910 in 253 msec 2023-07-14 04:15:57,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4ae08bec49a5131a2adce5e080b39421, UNASSIGN in 279 msec 2023-07-14 04:15:57,111 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:15:57,111 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f06c5e71f6abb4c3ee0c166f85d4e6f, UNASSIGN in 292 msec 2023-07-14 04:15:57,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa. 
2023-07-14 04:15:57,112 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e7c9db408e40d16e065ca42c233561aa: 2023-07-14 04:15:57,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:57,114 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=e7c9db408e40d16e065ca42c233561aa, regionState=CLOSED 2023-07-14 04:15:57,115 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308157114"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157114"}]},"ts":"1689308157114"} 2023-07-14 04:15:57,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 04:15:57,121 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-14 04:15:57,121 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure e7c9db408e40d16e065ca42c233561aa, server=jenkins-hbase4.apache.org,34609,1689308148721 in 280 msec 2023-07-14 04:15:57,122 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7c9db408e40d16e065ca42c233561aa, UNASSIGN in 319 msec 2023-07-14 04:15:57,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:15:57,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d. 2023-07-14 04:15:57,128 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cce0aecf0b5763ffbd5c8e8db63f128d: 2023-07-14 04:15:57,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:57,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:57,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be4fd1e007dba543a11373f4d78c0dbf, disabling compactions & flushes 2023-07-14 04:15:57,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:57,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:57,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 
after waiting 0 ms 2023-07-14 04:15:57,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 2023-07-14 04:15:57,134 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=cce0aecf0b5763ffbd5c8e8db63f128d, regionState=CLOSED 2023-07-14 04:15:57,134 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157134"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157134"}]},"ts":"1689308157134"} 2023-07-14 04:15:57,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-14 04:15:57,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure cce0aecf0b5763ffbd5c8e8db63f128d, server=jenkins-hbase4.apache.org,33827,1689308148910 in 301 msec 2023-07-14 04:15:57,143 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cce0aecf0b5763ffbd5c8e8db63f128d, UNASSIGN in 338 msec 2023-07-14 04:15:57,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:15:57,158 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf. 
2023-07-14 04:15:57,158 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be4fd1e007dba543a11373f4d78c0dbf: 2023-07-14 04:15:57,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:57,161 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=be4fd1e007dba543a11373f4d78c0dbf, regionState=CLOSED 2023-07-14 04:15:57,161 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157161"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157161"}]},"ts":"1689308157161"} 2023-07-14 04:15:57,165 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=43 2023-07-14 04:15:57,165 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=43, state=SUCCESS; CloseRegionProcedure be4fd1e007dba543a11373f4d78c0dbf, server=jenkins-hbase4.apache.org,33827,1689308148910 in 331 msec 2023-07-14 04:15:57,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=41 2023-07-14 04:15:57,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=be4fd1e007dba543a11373f4d78c0dbf, UNASSIGN in 363 msec 2023-07-14 04:15:57,168 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308157168"}]},"ts":"1689308157168"} 2023-07-14 04:15:57,170 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-14 04:15:57,173 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-14 04:15:57,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 408 msec 2023-07-14 04:15:57,348 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 04:15:57,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-14 04:15:57,421 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-14 04:15:57,423 INFO [Listener at localhost/46681] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:57,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:57,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-14 04:15:57,491 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 04:15:57,495 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-14 04:15:57,547 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:57,547 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:57,547 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:57,547 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:57,548 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:57,561 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits] 2023-07-14 04:15:57,561 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits] 2023-07-14 04:15:57,562 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits] 2023-07-14 04:15:57,563 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits] 2023-07-14 04:15:57,570 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits] 2023-07-14 04:15:57,579 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:15:57,580 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-14 04:15:57,580 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:15:57,580 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-14 04:15:57,580 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:15:57,580 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-14 04:15:57,582 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 04:15:57,583 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-14 04:15:57,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 04:15:57,596 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa/recovered.edits/7.seqid 2023-07-14 04:15:57,596 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d/recovered.edits/7.seqid 2023-07-14 04:15:57,598 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7c9db408e40d16e065ca42c233561aa 2023-07-14 04:15:57,601 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived 
from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f/recovered.edits/7.seqid 2023-07-14 04:15:57,602 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f06c5e71f6abb4c3ee0c166f85d4e6f 2023-07-14 04:15:57,604 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf/recovered.edits/7.seqid 2023-07-14 04:15:57,604 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421/recovered.edits/7.seqid 2023-07-14 04:15:57,605 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cce0aecf0b5763ffbd5c8e8db63f128d 2023-07-14 04:15:57,605 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/be4fd1e007dba543a11373f4d78c0dbf 2023-07-14 04:15:57,605 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4ae08bec49a5131a2adce5e080b39421 2023-07-14 04:15:57,605 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 04:15:57,643 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-14 04:15:57,648 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-14 04:15:57,649 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-14 04:15:57,649 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308157649"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,649 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308157649"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,649 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308157649"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,649 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308157649"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,650 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308157649"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,653 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 04:15:57,654 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3f06c5e71f6abb4c3ee0c166f85d4e6f, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308154507.3f06c5e71f6abb4c3ee0c166f85d4e6f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => be4fd1e007dba543a11373f4d78c0dbf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308154507.be4fd1e007dba543a11373f4d78c0dbf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 4ae08bec49a5131a2adce5e080b39421, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308154507.4ae08bec49a5131a2adce5e080b39421.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cce0aecf0b5763ffbd5c8e8db63f128d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308154507.cce0aecf0b5763ffbd5c8e8db63f128d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e7c9db408e40d16e065ca42c233561aa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308154507.e7c9db408e40d16e065ca42c233561aa.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 04:15:57,654 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-14 04:15:57,654 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308157654"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:57,657 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-14 04:15:57,669 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:57,670 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:57,670 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:57,670 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:57,670 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:57,670 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 empty. 2023-07-14 04:15:57,671 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 empty. 2023-07-14 04:15:57,671 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd empty. 2023-07-14 04:15:57,671 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e empty. 2023-07-14 04:15:57,672 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:57,672 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:57,672 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb empty. 
2023-07-14 04:15:57,672 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:57,672 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:57,672 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:57,672 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 04:15:57,709 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-14 04:15:57,711 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => fc8568639fe059af4acb05ef8df2b2fd, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:57,712 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49fab15a3bb168b7387d0b37e3af97a7, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:57,712 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => e2a92bca3e76eac7f3126dbabb39a20e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:57,782 WARN [DataStreamer for file /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/.regioninfo] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-14 04:15:57,783 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:57,783 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 49fab15a3bb168b7387d0b37e3af97a7, disabling compactions & flushes 2023-07-14 04:15:57,784 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:57,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:57,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. after waiting 0 ms 2023-07-14 04:15:57,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:57,784 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 
2023-07-14 04:15:57,784 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 49fab15a3bb168b7387d0b37e3af97a7: 2023-07-14 04:15:57,784 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => db42f4bbb91d9858e3df8a0a11fe9821, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:57,786 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:57,786 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing e2a92bca3e76eac7f3126dbabb39a20e, disabling compactions & flushes 2023-07-14 04:15:57,787 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. after waiting 0 ms 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:57,787 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 
2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for e2a92bca3e76eac7f3126dbabb39a20e: 2023-07-14 04:15:57,787 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0acc065b219ca97a92d14400e79fceeb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing fc8568639fe059af4acb05ef8df2b2fd, disabling compactions & flushes 2023-07-14 04:15:57,787 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. after waiting 0 ms 2023-07-14 04:15:57,787 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:57,787 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 
2023-07-14 04:15:57,788 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for fc8568639fe059af4acb05ef8df2b2fd: 2023-07-14 04:15:57,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing db42f4bbb91d9858e3df8a0a11fe9821, disabling compactions & flushes 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0acc065b219ca97a92d14400e79fceeb, disabling compactions & flushes 2023-07-14 04:15:57,822 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:57,822 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:57,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. after waiting 0 ms 2023-07-14 04:15:57,822 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. after waiting 0 ms 2023-07-14 04:15:57,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:57,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 
2023-07-14 04:15:57,823 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:57,823 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:57,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0acc065b219ca97a92d14400e79fceeb: 2023-07-14 04:15:57,823 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for db42f4bbb91d9858e3df8a0a11fe9821: 2023-07-14 04:15:57,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308157827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157827"}]},"ts":"1689308157827"} 2023-07-14 04:15:57,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157827"}]},"ts":"1689308157827"} 2023-07-14 04:15:57,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157827"}]},"ts":"1689308157827"} 2023-07-14 04:15:57,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308157827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157827"}]},"ts":"1689308157827"} 2023-07-14 04:15:57,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308157827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308157827"}]},"ts":"1689308157827"} 2023-07-14 04:15:57,834 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-14 04:15:57,835 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308157835"}]},"ts":"1689308157835"} 2023-07-14 04:15:57,837 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-14 04:15:57,842 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:15:57,842 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:15:57,842 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:15:57,842 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:15:57,845 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, ASSIGN}] 2023-07-14 04:15:57,847 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, ASSIGN 2023-07-14 04:15:57,847 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, ASSIGN 2023-07-14 04:15:57,847 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, ASSIGN 2023-07-14 04:15:57,847 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, ASSIGN 2023-07-14 04:15:57,847 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, ASSIGN 2023-07-14 04:15:57,848 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:57,848 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:15:57,848 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:15:57,848 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:57,849 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:15:57,998 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 04:15:58,002 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=db42f4bbb91d9858e3df8a0a11fe9821, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,002 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=49fab15a3bb168b7387d0b37e3af97a7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,002 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=e2a92bca3e76eac7f3126dbabb39a20e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,003 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158002"}]},"ts":"1689308158002"} 2023-07-14 04:15:58,002 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=fc8568639fe059af4acb05ef8df2b2fd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,003 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158002"}]},"ts":"1689308158002"} 2023-07-14 04:15:58,003 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158002"}]},"ts":"1689308158002"} 2023-07-14 04:15:58,003 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158002"}]},"ts":"1689308158002"} 2023-07-14 04:15:58,002 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=0acc065b219ca97a92d14400e79fceeb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,003 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158002"}]},"ts":"1689308158002"} 2023-07-14 04:15:58,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; OpenRegionProcedure 
49fab15a3bb168b7387d0b37e3af97a7, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure e2a92bca3e76eac7f3126dbabb39a20e, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:58,009 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=55, state=RUNNABLE; OpenRegionProcedure fc8568639fe059af4acb05ef8df2b2fd, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,011 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=56, state=RUNNABLE; OpenRegionProcedure db42f4bbb91d9858e3df8a0a11fe9821, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure 0acc065b219ca97a92d14400e79fceeb, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:58,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 04:15:58,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fc8568639fe059af4acb05ef8df2b2fd, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 04:15:58,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:58,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,166 INFO [StoreOpener-fc8568639fe059af4acb05ef8df2b2fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 
2023-07-14 04:15:58,168 DEBUG [StoreOpener-fc8568639fe059af4acb05ef8df2b2fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/f 2023-07-14 04:15:58,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e2a92bca3e76eac7f3126dbabb39a20e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 04:15:58,168 DEBUG [StoreOpener-fc8568639fe059af4acb05ef8df2b2fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/f 2023-07-14 04:15:58,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:58,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,168 INFO [StoreOpener-fc8568639fe059af4acb05ef8df2b2fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fc8568639fe059af4acb05ef8df2b2fd columnFamilyName f 2023-07-14 04:15:58,169 INFO [StoreOpener-fc8568639fe059af4acb05ef8df2b2fd-1] regionserver.HStore(310): Store=fc8568639fe059af4acb05ef8df2b2fd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:58,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,172 INFO 
[StoreOpener-e2a92bca3e76eac7f3126dbabb39a20e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,174 DEBUG [StoreOpener-e2a92bca3e76eac7f3126dbabb39a20e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/f 2023-07-14 04:15:58,174 DEBUG [StoreOpener-e2a92bca3e76eac7f3126dbabb39a20e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/f 2023-07-14 04:15:58,174 INFO [StoreOpener-e2a92bca3e76eac7f3126dbabb39a20e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e2a92bca3e76eac7f3126dbabb39a20e columnFamilyName f 2023-07-14 04:15:58,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,177 INFO [StoreOpener-e2a92bca3e76eac7f3126dbabb39a20e-1] regionserver.HStore(310): Store=e2a92bca3e76eac7f3126dbabb39a20e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:58,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:58,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fc8568639fe059af4acb05ef8df2b2fd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9897206560, jitterRate=-0.07825081050395966}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:58,181 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fc8568639fe059af4acb05ef8df2b2fd: 2023-07-14 04:15:58,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd., pid=60, masterSystemTime=1689308158158 2023-07-14 04:15:58,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49fab15a3bb168b7387d0b37e3af97a7, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 04:15:58,186 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=fc8568639fe059af4acb05ef8df2b2fd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:58,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,186 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158185"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308158185"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308158185"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308158185"}]},"ts":"1689308158185"} 2023-07-14 04:15:58,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-14 04:15:58,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; OpenRegionProcedure fc8568639fe059af4acb05ef8df2b2fd, 
server=jenkins-hbase4.apache.org,34609,1689308148721 in 180 msec 2023-07-14 04:15:58,193 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, ASSIGN in 346 msec 2023-07-14 04:15:58,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:58,196 INFO [StoreOpener-49fab15a3bb168b7387d0b37e3af97a7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e2a92bca3e76eac7f3126dbabb39a20e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11965449760, jitterRate=0.11436934769153595}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:58,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e2a92bca3e76eac7f3126dbabb39a20e: 2023-07-14 04:15:58,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e., pid=59, masterSystemTime=1689308158162 2023-07-14 04:15:58,199 DEBUG [StoreOpener-49fab15a3bb168b7387d0b37e3af97a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/f 2023-07-14 04:15:58,199 DEBUG [StoreOpener-49fab15a3bb168b7387d0b37e3af97a7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/f 2023-07-14 04:15:58,200 INFO [StoreOpener-49fab15a3bb168b7387d0b37e3af97a7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49fab15a3bb168b7387d0b37e3af97a7 columnFamilyName f 2023-07-14 04:15:58,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 
2023-07-14 04:15:58,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:58,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0acc065b219ca97a92d14400e79fceeb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 04:15:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,202 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=e2a92bca3e76eac7f3126dbabb39a20e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,203 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308158202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308158202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308158202"}]},"ts":"1689308158202"} 2023-07-14 04:15:58,205 INFO [StoreOpener-49fab15a3bb168b7387d0b37e3af97a7-1] regionserver.HStore(310): Store=49fab15a3bb168b7387d0b37e3af97a7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:58,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-14 04:15:58,208 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure e2a92bca3e76eac7f3126dbabb39a20e, server=jenkins-hbase4.apache.org,33827,1689308148910 in 197 msec 2023-07-14 04:15:58,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,209 INFO [StoreOpener-0acc065b219ca97a92d14400e79fceeb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,210 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, ASSIGN in 362 msec 2023-07-14 04:15:58,211 DEBUG [StoreOpener-0acc065b219ca97a92d14400e79fceeb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/f 2023-07-14 04:15:58,211 DEBUG [StoreOpener-0acc065b219ca97a92d14400e79fceeb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/f 2023-07-14 04:15:58,212 INFO [StoreOpener-0acc065b219ca97a92d14400e79fceeb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0acc065b219ca97a92d14400e79fceeb columnFamilyName f 2023-07-14 04:15:58,213 INFO [StoreOpener-0acc065b219ca97a92d14400e79fceeb-1] regionserver.HStore(310): Store=0acc065b219ca97a92d14400e79fceeb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:58,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:58,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:58,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49fab15a3bb168b7387d0b37e3af97a7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11503322560, jitterRate=0.07133039832115173}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:58,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49fab15a3bb168b7387d0b37e3af97a7: 2023-07-14 04:15:58,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0acc065b219ca97a92d14400e79fceeb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11642704000, jitterRate=0.08431130647659302}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:58,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0acc065b219ca97a92d14400e79fceeb: 2023-07-14 04:15:58,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7., pid=58, masterSystemTime=1689308158158 2023-07-14 04:15:58,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb., pid=62, masterSystemTime=1689308158162 2023-07-14 04:15:58,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 
2023-07-14 04:15:58,225 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=49fab15a3bb168b7387d0b37e3af97a7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => db42f4bbb91d9858e3df8a0a11fe9821, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 04:15:58,225 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158224"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308158224"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308158224"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308158224"}]},"ts":"1689308158224"} 2023-07-14 04:15:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:15:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 
2023-07-14 04:15:58,227 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=0acc065b219ca97a92d14400e79fceeb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,227 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158227"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308158227"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308158227"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308158227"}]},"ts":"1689308158227"} 2023-07-14 04:15:58,228 INFO [StoreOpener-db42f4bbb91d9858e3df8a0a11fe9821-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,230 DEBUG [StoreOpener-db42f4bbb91d9858e3df8a0a11fe9821-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/f 2023-07-14 04:15:58,230 DEBUG [StoreOpener-db42f4bbb91d9858e3df8a0a11fe9821-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/f 2023-07-14 04:15:58,230 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-14 04:15:58,230 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; OpenRegionProcedure 49fab15a3bb168b7387d0b37e3af97a7, server=jenkins-hbase4.apache.org,34609,1689308148721 in 221 msec 2023-07-14 04:15:58,231 INFO [StoreOpener-db42f4bbb91d9858e3df8a0a11fe9821-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region db42f4bbb91d9858e3df8a0a11fe9821 columnFamilyName f 2023-07-14 04:15:58,231 INFO [StoreOpener-db42f4bbb91d9858e3df8a0a11fe9821-1] regionserver.HStore(310): Store=db42f4bbb91d9858e3df8a0a11fe9821/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:15:58,232 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-14 04:15:58,232 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure 0acc065b219ca97a92d14400e79fceeb, server=jenkins-hbase4.apache.org,33827,1689308148910 in 216 
msec 2023-07-14 04:15:58,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,232 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, ASSIGN in 387 msec 2023-07-14 04:15:58,234 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, ASSIGN in 387 msec 2023-07-14 04:15:58,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:15:58,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened db42f4bbb91d9858e3df8a0a11fe9821; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9504279360, jitterRate=-0.11484500765800476}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:15:58,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for db42f4bbb91d9858e3df8a0a11fe9821: 2023-07-14 04:15:58,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821., pid=61, masterSystemTime=1689308158158 2023-07-14 04:15:58,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:58,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 
2023-07-14 04:15:58,247 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=db42f4bbb91d9858e3df8a0a11fe9821, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,247 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158247"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308158247"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308158247"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308158247"}]},"ts":"1689308158247"} 2023-07-14 04:15:58,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=56 2023-07-14 04:15:58,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=56, state=SUCCESS; OpenRegionProcedure db42f4bbb91d9858e3df8a0a11fe9821, server=jenkins-hbase4.apache.org,34609,1689308148721 in 238 msec 2023-07-14 04:15:58,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-14 04:15:58,253 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, ASSIGN in 406 msec 2023-07-14 04:15:58,253 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308158253"}]},"ts":"1689308158253"} 2023-07-14 04:15:58,255 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-14 04:15:58,259 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-14 04:15:58,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 827 msec 2023-07-14 04:15:58,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-14 04:15:58,598 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-14 04:15:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:58,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:58,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:58,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:58,601 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 04:15:58,607 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308158607"}]},"ts":"1689308158607"} 2023-07-14 04:15:58,609 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-14 04:15:58,611 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-14 04:15:58,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, UNASSIGN}] 2023-07-14 04:15:58,614 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, UNASSIGN 2023-07-14 04:15:58,614 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, UNASSIGN 2023-07-14 04:15:58,614 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, UNASSIGN 2023-07-14 04:15:58,614 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, UNASSIGN 
2023-07-14 04:15:58,615 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, UNASSIGN 2023-07-14 04:15:58,615 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=db42f4bbb91d9858e3df8a0a11fe9821, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,616 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158615"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158615"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158615"}]},"ts":"1689308158615"} 2023-07-14 04:15:58,616 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=e2a92bca3e76eac7f3126dbabb39a20e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,616 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=fc8568639fe059af4acb05ef8df2b2fd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,616 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158616"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158616"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158616"}]},"ts":"1689308158616"} 2023-07-14 04:15:58,616 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=0acc065b219ca97a92d14400e79fceeb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:15:58,616 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158616"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158616"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158616"}]},"ts":"1689308158616"} 2023-07-14 04:15:58,616 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158616"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158616"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158616"}]},"ts":"1689308158616"} 2023-07-14 04:15:58,617 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=49fab15a3bb168b7387d0b37e3af97a7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:15:58,617 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158617"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308158617"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308158617"}]},"ts":"1689308158617"} 2023-07-14 04:15:58,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=67, state=RUNNABLE; CloseRegionProcedure db42f4bbb91d9858e3df8a0a11fe9821, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure e2a92bca3e76eac7f3126dbabb39a20e, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:58,621 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=66, state=RUNNABLE; CloseRegionProcedure fc8568639fe059af4acb05ef8df2b2fd, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure 0acc065b219ca97a92d14400e79fceeb, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:15:58,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=64, state=RUNNABLE; CloseRegionProcedure 49fab15a3bb168b7387d0b37e3af97a7, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:15:58,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 04:15:58,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fc8568639fe059af4acb05ef8df2b2fd, disabling compactions & flushes 2023-07-14 04:15:58,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. after waiting 0 ms 2023-07-14 04:15:58,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 
2023-07-14 04:15:58,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0acc065b219ca97a92d14400e79fceeb, disabling compactions & flushes 2023-07-14 04:15:58,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. after waiting 0 ms 2023-07-14 04:15:58,784 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:58,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd. 2023-07-14 04:15:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fc8568639fe059af4acb05ef8df2b2fd: 2023-07-14 04:15:58,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49fab15a3bb168b7387d0b37e3af97a7, disabling compactions & flushes 2023-07-14 04:15:58,796 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. after waiting 0 ms 2023-07-14 04:15:58,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 
2023-07-14 04:15:58,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:58,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb. 2023-07-14 04:15:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0acc065b219ca97a92d14400e79fceeb: 2023-07-14 04:15:58,807 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=fc8568639fe059af4acb05ef8df2b2fd, regionState=CLOSED 2023-07-14 04:15:58,807 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308158807"}]},"ts":"1689308158807"} 2023-07-14 04:15:58,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e2a92bca3e76eac7f3126dbabb39a20e, disabling compactions & flushes 2023-07-14 04:15:58,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:58,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 2023-07-14 04:15:58,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. after waiting 0 ms 2023-07-14 04:15:58,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 
2023-07-14 04:15:58,814 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=0acc065b219ca97a92d14400e79fceeb, regionState=CLOSED 2023-07-14 04:15:58,814 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158814"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308158814"}]},"ts":"1689308158814"} 2023-07-14 04:15:58,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-14 04:15:58,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; CloseRegionProcedure fc8568639fe059af4acb05ef8df2b2fd, server=jenkins-hbase4.apache.org,34609,1689308148721 in 188 msec 2023-07-14 04:15:58,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-14 04:15:58,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure 0acc065b219ca97a92d14400e79fceeb, server=jenkins-hbase4.apache.org,33827,1689308148910 in 193 msec 2023-07-14 04:15:58,822 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fc8568639fe059af4acb05ef8df2b2fd, UNASSIGN in 207 msec 2023-07-14 04:15:58,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:58,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e. 
2023-07-14 04:15:58,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e2a92bca3e76eac7f3126dbabb39a20e: 2023-07-14 04:15:58,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,826 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0acc065b219ca97a92d14400e79fceeb, UNASSIGN in 209 msec 2023-07-14 04:15:58,827 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=e2a92bca3e76eac7f3126dbabb39a20e, regionState=CLOSED 2023-07-14 04:15:58,827 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308158827"}]},"ts":"1689308158827"} 2023-07-14 04:15:58,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:58,829 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7. 2023-07-14 04:15:58,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49fab15a3bb168b7387d0b37e3af97a7: 2023-07-14 04:15:58,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing db42f4bbb91d9858e3df8a0a11fe9821, disabling compactions & flushes 2023-07-14 04:15:58,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:58,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 2023-07-14 04:15:58,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. after waiting 0 ms 2023-07-14 04:15:58,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 
2023-07-14 04:15:58,832 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=49fab15a3bb168b7387d0b37e3af97a7, regionState=CLOSED 2023-07-14 04:15:58,833 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689308158832"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308158832"}]},"ts":"1689308158832"} 2023-07-14 04:15:58,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-14 04:15:58,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure e2a92bca3e76eac7f3126dbabb39a20e, server=jenkins-hbase4.apache.org,33827,1689308148910 in 210 msec 2023-07-14 04:15:58,836 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e2a92bca3e76eac7f3126dbabb39a20e, UNASSIGN in 222 msec 2023-07-14 04:15:58,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=64 2023-07-14 04:15:58,841 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=64, state=SUCCESS; CloseRegionProcedure 49fab15a3bb168b7387d0b37e3af97a7, server=jenkins-hbase4.apache.org,34609,1689308148721 in 215 msec 2023-07-14 04:15:58,843 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49fab15a3bb168b7387d0b37e3af97a7, UNASSIGN in 229 msec 2023-07-14 04:15:58,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:15:58,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821. 
2023-07-14 04:15:58,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for db42f4bbb91d9858e3df8a0a11fe9821: 2023-07-14 04:15:58,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,851 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=db42f4bbb91d9858e3df8a0a11fe9821, regionState=CLOSED 2023-07-14 04:15:58,851 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689308158851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308158851"}]},"ts":"1689308158851"} 2023-07-14 04:15:58,855 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=67 2023-07-14 04:15:58,856 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=67, state=SUCCESS; CloseRegionProcedure db42f4bbb91d9858e3df8a0a11fe9821, server=jenkins-hbase4.apache.org,34609,1689308148721 in 235 msec 2023-07-14 04:15:58,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=63 2023-07-14 04:15:58,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=db42f4bbb91d9858e3df8a0a11fe9821, UNASSIGN in 244 msec 2023-07-14 04:15:58,859 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308158858"}]},"ts":"1689308158858"} 2023-07-14 04:15:58,860 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-14 04:15:58,862 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-14 04:15:58,864 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 261 msec 2023-07-14 04:15:58,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-14 04:15:58,911 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-14 04:15:58,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,933 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 
'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_84229406' 2023-07-14 04:15:58,935 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:58,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:58,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:58,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:58,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-14 04:15:58,953 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,953 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,953 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,953 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,953 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,957 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/recovered.edits] 2023-07-14 04:15:58,958 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/recovered.edits] 2023-07-14 04:15:58,958 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/recovered.edits] 2023-07-14 04:15:58,958 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/recovered.edits] 2023-07-14 04:15:58,960 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/recovered.edits] 2023-07-14 04:15:58,974 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821/recovered.edits/4.seqid 2023-07-14 04:15:58,974 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7/recovered.edits/4.seqid 2023-07-14 04:15:58,975 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/db42f4bbb91d9858e3df8a0a11fe9821 2023-07-14 04:15:58,978 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49fab15a3bb168b7387d0b37e3af97a7 2023-07-14 04:15:58,979 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb/recovered.edits/4.seqid 2023-07-14 04:15:58,979 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd/recovered.edits/4.seqid 2023-07-14 04:15:58,980 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e/recovered.edits/4.seqid 2023-07-14 04:15:58,980 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0acc065b219ca97a92d14400e79fceeb 2023-07-14 04:15:58,981 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fc8568639fe059af4acb05ef8df2b2fd 2023-07-14 04:15:58,981 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e2a92bca3e76eac7f3126dbabb39a20e 2023-07-14 04:15:58,981 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-14 04:15:58,984 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,990 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-14 04:15:58,993 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-14 04:15:58,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308158995"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308158995"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308158995"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308158995"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:58,995 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308158995"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:58,998 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 04:15:58,998 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 49fab15a3bb168b7387d0b37e3af97a7, NAME => 'Group_testTableMoveTruncateAndDrop,,1689308157608.49fab15a3bb168b7387d0b37e3af97a7.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => e2a92bca3e76eac7f3126dbabb39a20e, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689308157608.e2a92bca3e76eac7f3126dbabb39a20e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => fc8568639fe059af4acb05ef8df2b2fd, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689308157608.fc8568639fe059af4acb05ef8df2b2fd.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => db42f4bbb91d9858e3df8a0a11fe9821, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689308157608.db42f4bbb91d9858e3df8a0a11fe9821.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 0acc065b219ca97a92d14400e79fceeb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689308157608.0acc065b219ca97a92d14400e79fceeb.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 04:15:58,998 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-14 04:15:58,998 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308158998"}]},"ts":"9223372036854775807"} 2023-07-14 04:15:59,000 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-14 04:15:59,003 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-14 04:15:59,005 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 83 msec 2023-07-14 04:15:59,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-14 04:15:59,054 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-14 04:15:59,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:59,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:15:59,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup default 2023-07-14 04:15:59,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:59,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:59,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_84229406, current retry=0 2023-07-14 04:15:59,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:59,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_84229406 => default 2023-07-14 04:15:59,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_84229406 2023-07-14 04:15:59,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:15:59,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:15:59,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:15:59,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:15:59,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:15:59,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,113 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:15:59,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:15:59,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:15:59,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:15:59,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309359135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:15:59,136 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:15:59,138 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:59,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,139 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:15:59,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:59,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,173 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=494 (was 422) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-98080073_17 at /127.0.0.1:55578 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:35288 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:37557-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1918969580-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4-prefix:jenkins-hbase4.apache.org,37557,1689308152906 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:33983 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1918969580-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_193583134_17 at /127.0.0.1:34894 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37557Replication Statistics #0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56534@0x694ffec5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56534@0x694ffec5-SendThread(127.0.0.1:56534) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1918969580-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:55546 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-519b7a7c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1918969580-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1918969580-635-acceptor-0@1f9bd000-ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:38173} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33983 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:3;jenkins-hbase4:37557 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:34982 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37557 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56534@0x694ffec5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 698) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=505 (was 480) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=4530 (was 5150) 2023-07-14 04:15:59,191 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=494, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=505, ProcessCount=172, AvailableMemoryMB=4529 2023-07-14 04:15:59,192 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-14 04:15:59,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:15:59,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:15:59,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:15:59,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:15:59,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,213 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:15:59,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:15:59,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:15:59,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:15:59,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309359227, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:15:59,228 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:15:59,230 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:59,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,232 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:15:59,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:59,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-14 04:15:59,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60972 deadline: 1689309359233, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 04:15:59,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-14 04:15:59,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:60972 deadline: 1689309359235, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 04:15:59,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-14 04:15:59,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:60972 deadline: 1689309359236, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-14 04:15:59,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-14 04:15:59,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-14 04:15:59,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:59,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:15:59,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:15:59,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-14 04:15:59,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:15:59,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:15:59,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:15:59,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:15:59,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:15:59,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,274 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:15:59,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:15:59,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:15:59,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:15:59,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309359290, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:15:59,291 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:15:59,293 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:59,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,294 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:15:59,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:59,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,311 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=497 (was 494) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=784 (was 784), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=505 (was 505), ProcessCount=172 (was 172), AvailableMemoryMB=4524 (was 4529) 2023-07-14 04:15:59,329 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=497, OpenFileDescriptor=784, MaxFileDescriptor=60000, SystemLoadAverage=505, ProcessCount=172, AvailableMemoryMB=4523 2023-07-14 04:15:59,329 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-14 04:15:59,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:15:59,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
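The list rsgroup / move tables [] / move servers [] / remove rsgroup master / add rsgroup master requests logged around this point are the per-method cleanup that TestRSGroupsBase runs between test methods. Below is a minimal sketch of the equivalent client-side calls, assuming the RSGroupAdminClient API from the hbase-rsgroup module named in the stack traces above (class and method names are taken from those traces; exact signatures are an assumption, not something this log states):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // "move tables [] to rsgroup default": an empty set is legal; the server
          // logs "moveTables() passed an empty set. Ignoring."
          rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
          // "move servers [] to rsgroup default"
          rsGroupAdmin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
          // "remove rsgroup master" followed by "add rsgroup master"
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");
        }
      }
    }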
2023-07-14 04:15:59,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:15:59,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:15:59,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:15:59,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:15:59,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:15:59,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:15:59,344 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:15:59,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:15:59,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:15:59,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:15:59,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:15:59,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309359356, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:15:59,357 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:15:59,358 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:15:59,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,360 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:15:59,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:59,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:15:59,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:15:59,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
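The ConstraintException above ("Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist") comes from RSGroupAdminServer.moveServers rejecting an address that is not an online region server; here it is the active master's address, and TestRSGroupsBase only logs the failure ("Got this on setup, FYI") and continues. A hedged sketch of that call and its tolerant handling on the client side (host and port copied from the log; the rsGroupAdmin object is assumed to be built as in the previous sketch):

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      static void tryMoveMasterToGroup(RSGroupAdminClient rsGroupAdmin) throws Exception {
        // 34797 is the master RPC port in this run; it is not a region server,
        // so the rsgroup endpoint rejects the move.
        Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 34797);
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        } catch (ConstraintException expected) {
          // Tolerated during setup/teardown; mirrors the WARN "Got this on setup, FYI".
          System.out.println(expected.getMessage());
        }
      }
    }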
2023-07-14 04:15:59,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:15:59,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:59,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:15:59,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:15:59,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:15:59,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763] to rsgroup bar 2023-07-14 04:15:59,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:15:59,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:15:59,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:15:59,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:15:59,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(238): Moving server region 73c3c960f2db2f2a26d94c9444d65972, which do not belong to RSGroup bar 2023-07-14 04:15:59,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, REOPEN/MOVE 2023-07-14 04:15:59,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-14 04:15:59,383 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, REOPEN/MOVE 2023-07-14 04:15:59,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 04:15:59,384 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-14 04:15:59,384 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-14 04:15:59,384 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:59,385 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308159384"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308159384"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308159384"}]},"ts":"1689308159384"} 2023-07-14 04:15:59,385 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34763,1689308149192, state=CLOSING 2023-07-14 04:15:59,387 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:59,387 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 04:15:59,387 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:15:59,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:15:59,389 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:15:59,546 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-14 04:15:59,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:15:59,547 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:15:59,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:15:59,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:15:59,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:15:59,548 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.95 KB heapSize=64.95 KB 2023-07-14 04:15:59,606 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.89 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/info/47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:15:59,616 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:15:59,633 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/rep_barrier/38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:15:59,643 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:15:59,660 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/table/d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:15:59,667 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:15:59,668 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/info/47442925d2ca4d30895529d01f2e24fb as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info/47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:15:59,675 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:15:59,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info/47442925d2ca4d30895529d01f2e24fb, entries=46, sequenceid=95, filesize=10.2 K 2023-07-14 04:15:59,678 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/rep_barrier/38d101f5dea940ebb1a20889b6490a92 as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier/38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:15:59,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:15:59,684 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier/38d101f5dea940ebb1a20889b6490a92, entries=10, sequenceid=95, filesize=6.1 K 2023-07-14 04:15:59,685 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/table/d83aa05207d74b33b1597a912fd268f7 as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table/d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:15:59,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:15:59,692 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table/d83aa05207d74b33b1597a912fd268f7, entries=15, sequenceid=95, filesize=6.2 K 2023-07-14 04:15:59,693 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.95 KB/42961, heapSize ~64.91 KB/66464, currentSize=0 B/0 for 1588230740 in 146ms, sequenceid=95, compaction requested=false 2023-07-14 04:15:59,705 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-14 04:15:59,706 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:15:59,706 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:15:59,706 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:15:59,706 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,37557,1689308152906 record at close sequenceid=95 2023-07-14 04:15:59,708 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-14 04:15:59,709 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-14 04:15:59,711 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-14 04:15:59,711 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34763,1689308149192 in 322 msec 2023-07-14 04:15:59,711 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:15:59,862 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37557,1689308152906, state=OPENING 2023-07-14 04:15:59,865 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-14 04:15:59,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:15:59,868 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:00,024 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 04:16:00,025 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:00,027 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37557%2C1689308152906.meta, suffix=.meta, logDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,37557,1689308152906, archiveDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs, maxLogs=32 2023-07-14 04:16:00,043 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK] 2023-07-14 04:16:00,044 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK] 2023-07-14 04:16:00,043 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK] 2023-07-14 04:16:00,046 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/WALs/jenkins-hbase4.apache.org,37557,1689308152906/jenkins-hbase4.apache.org%2C37557%2C1689308152906.meta.1689308160028.meta 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33565,DS-17a4212b-d975-4e0b-97ea-2b7781c7cf34,DISK], DatanodeInfoWithStorage[127.0.0.1:39385,DS-d5641973-e14e-4459-8879-1e0f49f3a25f,DISK], DatanodeInfoWithStorage[127.0.0.1:43633,DS-3de7d5c4-1417-4ce8-aaf3-9fb5dd0e6218,DISK]] 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 04:16:00,047 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 04:16:00,047 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 04:16:00,049 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:16:00,050 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info 2023-07-14 04:16:00,050 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info 2023-07-14 04:16:00,050 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:16:00,058 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:16:00,058 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info/47442925d2ca4d30895529d01f2e24fb 2023-07-14 04:16:00,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:00,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:16:00,059 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:00,059 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:00,060 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:16:00,068 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:16:00,068 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier/38d101f5dea940ebb1a20889b6490a92 2023-07-14 04:16:00,068 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:00,069 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:16:00,070 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table 2023-07-14 04:16:00,070 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table 2023-07-14 04:16:00,070 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:16:00,077 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:16:00,077 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table/d83aa05207d74b33b1597a912fd268f7 2023-07-14 04:16:00,077 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:00,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:16:00,079 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740 2023-07-14 04:16:00,082 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 04:16:00,083 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:16:00,084 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10794854240, jitterRate=0.005349144339561462}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:16:00,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:16:00,085 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689308160020 2023-07-14 04:16:00,086 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 04:16:00,087 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 04:16:00,087 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37557,1689308152906, state=OPEN 2023-07-14 04:16:00,088 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 04:16:00,088 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:00,091 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-14 04:16:00,091 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37557,1689308152906 in 223 msec 2023-07-14 04:16:00,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, 
region=1588230740, REOPEN/MOVE in 708 msec 2023-07-14 04:16:00,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:00,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73c3c960f2db2f2a26d94c9444d65972, disabling compactions & flushes 2023-07-14 04:16:00,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:00,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:00,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. after waiting 0 ms 2023-07-14 04:16:00,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:00,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 73c3c960f2db2f2a26d94c9444d65972 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-14 04:16:00,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-14 04:16:00,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/.tmp/info/417d90be96964f98bf3732b89f5da2aa 2023-07-14 04:16:00,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/.tmp/info/417d90be96964f98bf3732b89f5da2aa as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info/417d90be96964f98bf3732b89f5da2aa 2023-07-14 04:16:00,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info/417d90be96964f98bf3732b89f5da2aa, entries=2, sequenceid=6, filesize=4.8 K 2023-07-14 04:16:00,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 73c3c960f2db2f2a26d94c9444d65972 in 438ms, sequenceid=6, compaction requested=false 2023-07-14 04:16:00,687 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-14 04:16:00,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
2023-07-14 04:16:00,688 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73c3c960f2db2f2a26d94c9444d65972: 2023-07-14 04:16:00,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 73c3c960f2db2f2a26d94c9444d65972 move to jenkins-hbase4.apache.org,37557,1689308152906 record at close sequenceid=6 2023-07-14 04:16:00,690 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:00,690 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=CLOSED 2023-07-14 04:16:00,691 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308160690"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308160690"}]},"ts":"1689308160690"} 2023-07-14 04:16:00,691 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34763] ipc.CallRunner(144): callId: 185 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:33588 deadline: 1689308220691, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37557 startCode=1689308152906. As of locationSeqNum=95. 2023-07-14 04:16:00,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-14 04:16:00,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,34763,1689308149192 in 1.4080 sec 2023-07-14 04:16:00,797 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:00,948 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:00,948 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308160948"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308160948"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308160948"}]},"ts":"1689308160948"} 2023-07-14 04:16:00,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:01,108 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
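The records from "add rsgroup bar" through the REOPEN/MOVE procedures show the effect of a successful moveServers: jenkins-hbase4.apache.org:33827, :34609 and :34763 join group bar, and regions they were hosting for tables not assigned to bar (hbase:meta and hbase:namespace) are first reassigned to the remaining default-group server :37557, after which the master logs "Move servers done: default => bar". A minimal client-side sketch of that request, under the same API assumptions as the sketches above (host names and ports copied from the log):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToBarSketch {
      static void moveThreeServersToBar(RSGroupAdminClient rsGroupAdmin) throws Exception {
        rsGroupAdmin.addRSGroup("bar");
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 33827),
            Address.fromParts("jenkins-hbase4.apache.org", 34609),
            Address.fromParts("jenkins-hbase4.apache.org", 34763)));
        // The call returns only after the master has moved regions of tables that do
        // not belong to "bar" off these servers (the TransitRegionStateProcedure
        // REOPEN/MOVE records above) and has updated the group znodes.
        rsGroupAdmin.moveServers(servers, "bar");
        System.out.println("bar servers: " + rsGroupAdmin.getRSGroupInfo("bar").getServers());
      }
    }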
2023-07-14 04:16:01,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73c3c960f2db2f2a26d94c9444d65972, NAME => 'hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:01,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:01,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,116 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,117 DEBUG [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info 2023-07-14 04:16:01,117 DEBUG [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info 2023-07-14 04:16:01,118 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73c3c960f2db2f2a26d94c9444d65972 columnFamilyName info 2023-07-14 04:16:01,127 DEBUG [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] regionserver.HStore(539): loaded hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/info/417d90be96964f98bf3732b89f5da2aa 2023-07-14 04:16:01,128 INFO [StoreOpener-73c3c960f2db2f2a26d94c9444d65972-1] regionserver.HStore(310): Store=73c3c960f2db2f2a26d94c9444d65972/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:01,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,142 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:01,143 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73c3c960f2db2f2a26d94c9444d65972; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10828962720, jitterRate=0.008525744080543518}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:01,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73c3c960f2db2f2a26d94c9444d65972: 2023-07-14 04:16:01,144 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972., pid=80, masterSystemTime=1689308161103 2023-07-14 04:16:01,147 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:01,147 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
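The close/reopen of the hbase:namespace region traced above is a side effect of pulling its hosting server out of the default group: regions of tables that stay in default are reopened on a server that remains there before the "Move servers done: default => bar" entry below is logged. A sketch of how such a move is issued through the hbase-rsgroup client API (available because the RSGroupAdminEndpoint coprocessor is loaded in this run), using the group name and server addresses from this log; the connection setup and the standalone class are assumptions of the sketch, not part of the test:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToBar {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");   // create the target group; fails if it already exists
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33827));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34609));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34763));
          // Regions these servers host for tables that stay in the default group
          // (here hbase:namespace) are reopened on a remaining default server first.
          rsGroupAdmin.moveServers(servers, "bar");
        }
      }
    }
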
2023-07-14 04:16:01,150 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=73c3c960f2db2f2a26d94c9444d65972, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:01,150 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308161150"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308161150"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308161150"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308161150"}]},"ts":"1689308161150"} 2023-07-14 04:16:01,155 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-14 04:16:01,155 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 73c3c960f2db2f2a26d94c9444d65972, server=jenkins-hbase4.apache.org,37557,1689308152906 in 202 msec 2023-07-14 04:16:01,156 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=73c3c960f2db2f2a26d94c9444d65972, REOPEN/MOVE in 1.7740 sec 2023-07-14 04:16:01,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721, jenkins-hbase4.apache.org,34763,1689308149192] are moved back to default 2023-07-14 04:16:01,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-14 04:16:01,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:01,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:01,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:01,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-14 04:16:01,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:01,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:01,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:01,397 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:01,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-14 04:16:01,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 04:16:01,400 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:01,401 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:16:01,402 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:01,403 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:01,411 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:01,413 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,414 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 empty. 
2023-07-14 04:16:01,414 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,414 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-14 04:16:01,441 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:01,443 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ba0146dae40a447e28e951cb46e69b77, NAME => 'Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:01,458 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:01,458 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ba0146dae40a447e28e951cb46e69b77, disabling compactions & flushes 2023-07-14 04:16:01,458 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:01,459 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:01,459 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. after waiting 0 ms 2023-07-14 04:16:01,459 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:01,459 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:01,459 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:01,461 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:01,463 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308161462"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308161462"}]},"ts":"1689308161462"} 2023-07-14 04:16:01,465 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:01,467 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:01,467 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308161467"}]},"ts":"1689308161467"} 2023-07-14 04:16:01,471 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-14 04:16:01,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, ASSIGN}] 2023-07-14 04:16:01,484 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, ASSIGN 2023-07-14 04:16:01,485 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:01,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 04:16:01,636 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:01,636 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308161636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308161636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308161636"}]},"ts":"1689308161636"} 2023-07-14 04:16:01,638 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 
04:16:01,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 04:16:01,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:01,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba0146dae40a447e28e951cb46e69b77, NAME => 'Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:01,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:01,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,803 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,804 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:01,804 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:01,805 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba0146dae40a447e28e951cb46e69b77 columnFamilyName f 2023-07-14 04:16:01,806 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(310): Store=ba0146dae40a447e28e951cb46e69b77/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:01,806 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,810 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:01,818 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:01,819 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba0146dae40a447e28e951cb46e69b77; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10246829600, jitterRate=-0.04568962752819061}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:01,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:01,820 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77., pid=83, masterSystemTime=1689308161790 2023-07-14 04:16:01,822 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:01,822 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
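The entries from 04:16:01,394 onward are a plain CreateTableProcedure (pid=81): write the table descriptor and region dir under .tmp, add the region to hbase:meta, assign it, mark the table ENABLED. A minimal client-side equivalent of that create request; the table name and the single 'f' family (VERSIONS => '1', BLOOMFILTER => 'NONE') are taken from the descriptor logged above, while the connection setup and class name are assumptions of the sketch:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestTable {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Single column family 'f' with the attributes shown in the logged descriptor.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .build())
              .build());
          // createTable blocks until the region is assigned, matching the
          // "Operation: CREATE ... procId: 81 completed" entry that follows.
        }
      }
    }
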
2023-07-14 04:16:01,822 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:01,822 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308161822"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308161822"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308161822"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308161822"}]},"ts":"1689308161822"} 2023-07-14 04:16:01,828 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-14 04:16:01,828 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906 in 188 msec 2023-07-14 04:16:01,830 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-14 04:16:01,830 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, ASSIGN in 349 msec 2023-07-14 04:16:01,830 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:01,831 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308161830"}]},"ts":"1689308161830"} 2023-07-14 04:16:01,835 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-14 04:16:01,839 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:01,840 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 445 msec 2023-07-14 04:16:02,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-14 04:16:02,003 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-14 04:16:02,003 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-14 04:16:02,004 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:02,005 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34763] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:33604 deadline: 1689308222005, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37557 startCode=1689308152906. As of locationSeqNum=95. 2023-07-14 04:16:02,117 DEBUG [hconnection-0x6ac6849-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:02,119 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44182, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:02,129 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-14 04:16:02,130 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:02,130 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-14 04:16:02,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-14 04:16:02,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:02,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:16:02,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:02,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:02,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-14 04:16:02,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region ba0146dae40a447e28e951cb46e69b77 to RSGroup bar 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-14 04:16:02,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:02,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE 2023-07-14 04:16:02,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-14 04:16:02,141 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE 2023-07-14 04:16:02,142 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:02,142 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308162142"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308162142"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308162142"}]},"ts":"1689308162142"} 2023-07-14 04:16:02,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:02,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba0146dae40a447e28e951cb46e69b77, disabling compactions & flushes 2023-07-14 04:16:02,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:02,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:02,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. after waiting 0 ms 2023-07-14 04:16:02,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:02,309 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:02,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:02,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:02,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ba0146dae40a447e28e951cb46e69b77 move to jenkins-hbase4.apache.org,34763,1689308149192 record at close sequenceid=2 2023-07-14 04:16:02,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,313 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSED 2023-07-14 04:16:02,313 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308162313"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308162313"}]},"ts":"1689308162313"} 2023-07-14 04:16:02,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-14 04:16:02,317 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906 in 168 msec 2023-07-14 04:16:02,318 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:02,468 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 04:16:02,469 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:02,470 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308162469"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308162469"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308162469"}]},"ts":"1689308162469"} 2023-07-14 04:16:02,473 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:02,547 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 04:16:02,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba0146dae40a447e28e951cb46e69b77, NAME => 'Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,637 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,639 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:02,639 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:02,639 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba0146dae40a447e28e951cb46e69b77 columnFamilyName f 2023-07-14 04:16:02,640 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(310): Store=ba0146dae40a447e28e951cb46e69b77/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:02,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,642 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:02,647 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba0146dae40a447e28e951cb46e69b77; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10238824480, jitterRate=-0.0464351624250412}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:02,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:02,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77., pid=86, masterSystemTime=1689308162625 2023-07-14 04:16:02,650 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:02,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:02,651 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:02,651 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308162651"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308162651"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308162651"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308162651"}]},"ts":"1689308162651"} 2023-07-14 04:16:02,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-14 04:16:02,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,34763,1689308149192 in 181 msec 2023-07-14 04:16:02,658 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE in 517 msec 2023-07-14 04:16:03,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-14 04:16:03,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
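The block from 04:16:02,132 to 04:16:03,142 is a table move into group 'bar': the endpoint rewrites the rsgroup znodes, then RSGroupAdminServer reopens every region of the table (pid=84, REOPEN/MOVE) on a server belonging to 'bar' and waits on the procedure before reporting "All regions ... moved to target group bar". The client-side call looks roughly like this; the names come from the log, and the connection setup is an assumption of the sketch:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToBar {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
          // Blocks until every region of the table has been reopened on a 'bar' server
          // (the ProcedureSyncWait on pid=84 in the log).
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }
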
2023-07-14 04:16:03,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:03,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:03,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:03,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-14 04:16:03,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:03,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-14 04:16:03,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:03,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60972 deadline: 1689309363150, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-14 04:16:03,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763] to rsgroup default 2023-07-14 04:16:03,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:03,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:60972 deadline: 1689309363152, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-14 04:16:03,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-14 04:16:03,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:03,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:16:03,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:03,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:03,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-14 04:16:03,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region ba0146dae40a447e28e951cb46e69b77 to RSGroup default 2023-07-14 04:16:03,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE 2023-07-14 04:16:03,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 04:16:03,164 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE 2023-07-14 04:16:03,165 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:03,165 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308163165"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308163165"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308163165"}]},"ts":"1689308163165"} 2023-07-14 04:16:03,169 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:03,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba0146dae40a447e28e951cb46e69b77, disabling compactions & flushes 2023-07-14 04:16:03,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:03,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:03,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. after waiting 0 ms 2023-07-14 04:16:03,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:03,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:16:03,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:03,332 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:03,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ba0146dae40a447e28e951cb46e69b77 move to jenkins-hbase4.apache.org,37557,1689308152906 record at close sequenceid=5 2023-07-14 04:16:03,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,335 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSED 2023-07-14 04:16:03,335 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308163335"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308163335"}]},"ts":"1689308163335"} 2023-07-14 04:16:03,339 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-14 04:16:03,339 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,34763,1689308149192 in 171 msec 2023-07-14 04:16:03,339 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:03,490 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:03,490 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308163490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308163490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308163490"}]},"ts":"1689308163490"} 2023-07-14 04:16:03,492 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:03,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:03,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba0146dae40a447e28e951cb46e69b77, NAME => 'Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:03,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:03,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,651 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,652 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:03,652 DEBUG [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f 2023-07-14 04:16:03,653 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba0146dae40a447e28e951cb46e69b77 columnFamilyName f 2023-07-14 04:16:03,653 INFO [StoreOpener-ba0146dae40a447e28e951cb46e69b77-1] regionserver.HStore(310): Store=ba0146dae40a447e28e951cb46e69b77/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:03,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,656 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:03,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba0146dae40a447e28e951cb46e69b77; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10605722240, jitterRate=-0.012265145778656006}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:03,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:03,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77., pid=89, masterSystemTime=1689308163644 2023-07-14 04:16:03,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:03,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:03,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:03,663 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308163663"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308163663"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308163663"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308163663"}]},"ts":"1689308163663"} 2023-07-14 04:16:03,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-14 04:16:03,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906 in 173 msec 2023-07-14 04:16:03,668 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, REOPEN/MOVE in 504 msec 2023-07-14 04:16:04,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-14 04:16:04,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
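The ListRSGroupInfos and GetRSGroupInfo requests interleaved with the failed removals are the caller checking what 'bar' still contains. Roughly, on the client side (group name from the log; connection setup and class name are assumptions of the sketch):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class InspectBarGroup {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupInfo bar = new RSGroupAdminClient(conn).getRSGroupInfo("bar");
          // removeRSGroup only succeeds once both of these are empty.
          System.out.println("servers in bar: " + bar.getServers());
          System.out.println("tables in bar : " + bar.getTables());
        }
      }
    }
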
2023-07-14 04:16:04,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:04,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-14 04:16:04,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:04,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:60972 deadline: 1689309364171, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
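[editorial note] The ConstraintException above is the expected negative path of testFailRemoveGroup: RemoveRSGroup refuses a group that still holds servers. A minimal, hypothetical sketch of the client-side calls that empty such a group and then remove it, assuming an open Connection named conn and the hbase-rsgroup client API that these tests exercise (RSGroupAdminClient.moveServers appears in the stack traces later in this log); the helper name is illustrative and not part of the test:

  import java.util.HashSet;
  import java.util.Set;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class RemoveGroupSketch {
    // Empty the group before removing it; RemoveRSGroup rejects non-empty groups with the
    // ConstraintException recorded in the log above.
    static void emptyAndRemove(Connection conn, String group) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      if (info != null && !info.getServers().isEmpty()) {
        // Move every member server back to the built-in default group first.
        Set<Address> servers = new HashSet<>(info.getServers());
        rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
      }
      rsGroupAdmin.removeRSGroup(group);
    }
  }

This mirrors the sequence the test takes next in the log: a MoveServers call back to default, followed by a second RemoveRSGroup that succeeds.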
2023-07-14 04:16:04,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763] to rsgroup default 2023-07-14 04:16:04,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-14 04:16:04,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:04,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-14 04:16:04,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721, jenkins-hbase4.apache.org,34763,1689308149192] are moved back to bar 2023-07-14 04:16:04,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-14 04:16:04,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:04,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-14 04:16:04,187 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34763] ipc.CallRunner(144): callId: 213 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:33588 deadline: 1689308224186, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37557 startCode=1689308152906. As of locationSeqNum=6. 
2023-07-14 04:16:04,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:04,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:04,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,321 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-14 04:16:04,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-14 04:16:04,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 04:16:04,326 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308164326"}]},"ts":"1689308164326"} 2023-07-14 04:16:04,327 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-14 04:16:04,329 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-14 04:16:04,330 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, UNASSIGN}] 2023-07-14 04:16:04,332 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, UNASSIGN 2023-07-14 04:16:04,333 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:04,334 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308164333"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308164333"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308164333"}]},"ts":"1689308164333"} 2023-07-14 04:16:04,335 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:04,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 04:16:04,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:04,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba0146dae40a447e28e951cb46e69b77, disabling compactions & flushes 2023-07-14 04:16:04,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:04,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:04,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. after waiting 0 ms 2023-07-14 04:16:04,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 2023-07-14 04:16:04,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 04:16:04,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77. 
2023-07-14 04:16:04,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba0146dae40a447e28e951cb46e69b77: 2023-07-14 04:16:04,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:04,498 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ba0146dae40a447e28e951cb46e69b77, regionState=CLOSED 2023-07-14 04:16:04,498 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689308164498"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308164498"}]},"ts":"1689308164498"} 2023-07-14 04:16:04,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-14 04:16:04,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure ba0146dae40a447e28e951cb46e69b77, server=jenkins-hbase4.apache.org,37557,1689308152906 in 165 msec 2023-07-14 04:16:04,504 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-14 04:16:04,504 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ba0146dae40a447e28e951cb46e69b77, UNASSIGN in 172 msec 2023-07-14 04:16:04,505 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308164505"}]},"ts":"1689308164505"} 2023-07-14 04:16:04,506 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-14 04:16:04,508 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-14 04:16:04,510 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 188 msec 2023-07-14 04:16:04,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-14 04:16:04,627 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-14 04:16:04,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-14 04:16:04,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,631 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-14 04:16:04,631 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:04,636 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:04,638 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits] 2023-07-14 04:16:04,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-14 04:16:04,644 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/10.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77/recovered.edits/10.seqid 2023-07-14 04:16:04,644 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testFailRemoveGroup/ba0146dae40a447e28e951cb46e69b77 2023-07-14 04:16:04,644 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-14 04:16:04,647 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,649 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-14 04:16:04,651 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-14 04:16:04,652 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,652 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-14 04:16:04,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308164652"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:04,654 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 04:16:04,654 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ba0146dae40a447e28e951cb46e69b77, NAME => 'Group_testFailRemoveGroup,,1689308161394.ba0146dae40a447e28e951cb46e69b77.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 04:16:04,654 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-14 04:16:04,654 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308164654"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:04,656 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-14 04:16:04,658 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-14 04:16:04,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 30 msec 2023-07-14 04:16:04,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-14 04:16:04,740 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-14 04:16:04,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:04,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
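[editorial note] The DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) above are driven by ordinary Admin calls from the test client. A small sketch of that client side, assuming an open Connection named conn; the helper name and the exists/disabled guards are illustrative rather than taken from the test source:

  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;

  public class DropTableSketch {
    // Disable then delete a table; the master turns these calls into the DisableTableProcedure
    // and DeleteTableProcedure seen in the log (unassign regions, archive region directories,
    // clean up hbase:meta).
    static void dropTable(Connection conn, String name) throws Exception {
      TableName table = TableName.valueOf(name);
      try (Admin admin = conn.getAdmin()) {
        if (admin.tableExists(table)) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);
          }
          admin.deleteTable(table);
        }
      }
    }
  }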
2023-07-14 04:16:04,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:04,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:04,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:04,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:04,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:04,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:04,777 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:04,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:04,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:04,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:04,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:04,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:04,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309364790, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:04,791 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:04,794 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:04,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,795 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:04,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:04,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:04,814 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=514 (was 497) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:55764 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-720113497_17 at /127.0.0.1:43064 [Waiting for operation #6] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:35150 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-720113497_17 at /127.0.0.1:48092 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-720113497_17 at /127.0.0.1:55722 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6ac6849-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-720113497_17 at /127.0.0.1:55762 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-11 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4-prefix:jenkins-hbase4.apache.org,37557,1689308152906.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:35472 [Receiving block BP-112108073-172.31.14.131-1689308143026:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=805 (was 784) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=610 (was 505) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=4241 (was 4523) 2023-07-14 04:16:04,815 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-14 04:16:04,833 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=514, OpenFileDescriptor=805, MaxFileDescriptor=60000, SystemLoadAverage=610, ProcessCount=172, AvailableMemoryMB=4240 2023-07-14 04:16:04,833 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-14 04:16:04,833 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-14 04:16:04,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:04,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:04,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:04,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:04,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:04,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:04,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:04,853 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:04,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:04,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-14 04:16:04,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:04,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:04,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:04,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:04,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309364866, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:04,866 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:04,870 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:04,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,872 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:04,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:04,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:04,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:04,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:04,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_529003491 2023-07-14 04:16:04,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:04,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:04,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:04,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,887 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827] to rsgroup Group_testMultiTableMove_529003491 2023-07-14 04:16:04,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:04,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:04,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:04,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910] are moved back to default 2023-07-14 04:16:04,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_529003491 2023-07-14 04:16:04,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:04,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:04,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:04,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_529003491 2023-07-14 04:16:04,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:04,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:04,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:04,905 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:04,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-14 04:16:04,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 04:16:04,908 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:04,909 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:04,909 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:04,909 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:04,917 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:04,919 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:04,920 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 empty. 2023-07-14 04:16:04,920 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:04,920 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-14 04:16:04,969 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:04,970 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 96cc97f2298e258d70760d2b41dcaba5, NAME => 'GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:05,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; 
preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 96cc97f2298e258d70760d2b41dcaba5, disabling compactions & flushes 2023-07-14 04:16:05,020 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. after waiting 0 ms 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,020 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,020 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 96cc97f2298e258d70760d2b41dcaba5: 2023-07-14 04:16:05,023 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:05,025 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308165024"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308165024"}]},"ts":"1689308165024"} 2023-07-14 04:16:05,027 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
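[Editor's note] The entries above record RSGroupAdminService.AddRSGroup and MoveServers moving jenkins-hbase4.apache.org:33827 from the default group into Group_testMultiTableMove_529003491, and, a little earlier, the expected ConstraintException when the teardown code tries to move the master's own address. The following is a minimal sketch, assuming a reachable cluster configuration, of how a client drives these calls through the RSGroupAdminClient that the stack frames in this log reference; the connection setup is illustrative, only the addRSGroup/moveServers calls mirror the logged operations.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerToGroupSketch {
  public static void main(String[] args) throws Exception {
    // Assumes hbase-site.xml for the target (mini)cluster is on the classpath.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testMultiTableMove_529003491";  // group name taken from the log
      rsGroupAdmin.addRSGroup(group);                        // logged as RSGroupAdminService.AddRSGroup
      // Move one live region server into the new group, as logged for jenkins-hbase4.apache.org:33827.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33827)), group);
      // Passing a host:port that is not a live region server (e.g. the master's own address)
      // fails with ConstraintException, which is the exception the setup/teardown entries show.
    }
  }
}
```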
2023-07-14 04:16:05,028 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:05,028 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308165028"}]},"ts":"1689308165028"} 2023-07-14 04:16:05,030 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-14 04:16:05,034 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:05,034 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:05,034 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:05,034 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:05,034 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:05,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, ASSIGN}] 2023-07-14 04:16:05,040 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, ASSIGN 2023-07-14 04:16:05,042 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:05,192 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 04:16:05,194 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:05,194 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308165193"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308165193"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308165193"}]},"ts":"1689308165193"} 2023-07-14 04:16:05,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:05,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 04:16:05,215 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 04:16:05,216 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-14 04:16:05,362 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 96cc97f2298e258d70760d2b41dcaba5, NAME => 'GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:05,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:05,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,365 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,367 DEBUG [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/f 2023-07-14 04:16:05,367 DEBUG [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/f 2023-07-14 04:16:05,368 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 96cc97f2298e258d70760d2b41dcaba5 columnFamilyName f 2023-07-14 04:16:05,368 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] regionserver.HStore(310): Store=96cc97f2298e258d70760d2b41dcaba5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:05,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:05,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:05,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 96cc97f2298e258d70760d2b41dcaba5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10618109120, jitterRate=-0.011111527681350708}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:05,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 96cc97f2298e258d70760d2b41dcaba5: 2023-07-14 04:16:05,377 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5., pid=96, masterSystemTime=1689308165358 2023-07-14 04:16:05,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:05,378 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 
2023-07-14 04:16:05,379 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:05,379 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308165379"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308165379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308165379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308165379"}]},"ts":"1689308165379"} 2023-07-14 04:16:05,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-14 04:16:05,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,34763,1689308149192 in 174 msec 2023-07-14 04:16:05,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-14 04:16:05,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, ASSIGN in 348 msec 2023-07-14 04:16:05,384 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:05,385 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308165385"}]},"ts":"1689308165385"} 2023-07-14 04:16:05,386 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-14 04:16:05,389 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:05,390 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 487 msec 2023-07-14 04:16:05,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-14 04:16:05,512 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-14 04:16:05,512 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-14 04:16:05,513 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:05,517 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
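[Editor's note] The create of GrouptestMultiTableMoveA above (pid=94) is driven by a plain Admin.createTable call; the master prints the resulting descriptor: REGION_REPLICATION => '1' and one family 'f' with VERSIONS => '1'. A sketch of the equivalent client code, assuming a running cluster; the class and connection boilerplate are illustrative, the descriptor values come from the log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateMoveTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumes a reachable (mini)cluster
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'f' with VERSIONS => '1', matching the descriptor printed by HMaster.
      TableDescriptorBuilder table =
          TableDescriptorBuilder.newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
              .setRegionReplication(1)  // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)
                  .build());
      // Submits the CreateTableProcedure on the master (pid=94 in the log) and waits for it.
      admin.createTable(table.build());
    }
  }
}
```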
2023-07-14 04:16:05,517 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:05,518 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-14 04:16:05,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:05,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:05,524 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:05,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-14 04:16:05,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 04:16:05,529 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:05,529 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:05,531 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:05,532 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:05,535 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:05,537 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,538 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 empty. 
2023-07-14 04:16:05,538 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,538 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-14 04:16:05,576 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:05,578 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7d5faec8741f408a733da99f8c5b6197, NAME => 'GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:05,599 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:05,599 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 7d5faec8741f408a733da99f8c5b6197, disabling compactions & flushes 2023-07-14 04:16:05,599 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:05,599 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:05,599 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. after waiting 0 ms 2023-07-14 04:16:05,599 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:05,600 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 
2023-07-14 04:16:05,600 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 7d5faec8741f408a733da99f8c5b6197: 2023-07-14 04:16:05,603 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:05,604 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308165604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308165604"}]},"ts":"1689308165604"} 2023-07-14 04:16:05,609 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:05,610 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:05,610 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308165610"}]},"ts":"1689308165610"} 2023-07-14 04:16:05,616 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-14 04:16:05,620 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:05,620 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:05,620 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:05,620 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:05,621 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:05,621 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, ASSIGN}] 2023-07-14 04:16:05,623 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, ASSIGN 2023-07-14 04:16:05,624 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:05,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 04:16:05,775 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 04:16:05,776 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:05,776 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308165776"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308165776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308165776"}]},"ts":"1689308165776"} 2023-07-14 04:16:05,778 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:05,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 04:16:05,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:05,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7d5faec8741f408a733da99f8c5b6197, NAME => 'GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:05,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:05,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,937 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,938 DEBUG [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/f 2023-07-14 04:16:05,938 DEBUG [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/f 2023-07-14 04:16:05,939 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7d5faec8741f408a733da99f8c5b6197 columnFamilyName f 2023-07-14 04:16:05,940 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] regionserver.HStore(310): Store=7d5faec8741f408a733da99f8c5b6197/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:05,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:05,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:05,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7d5faec8741f408a733da99f8c5b6197; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10057651520, jitterRate=-0.06330820918083191}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:05,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7d5faec8741f408a733da99f8c5b6197: 2023-07-14 04:16:05,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197., pid=99, masterSystemTime=1689308165930 2023-07-14 04:16:06,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:06,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 
2023-07-14 04:16:06,129 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:06,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 04:16:06,130 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308166129"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308166129"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308166129"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308166129"}]},"ts":"1689308166129"} 2023-07-14 04:16:06,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-14 04:16:06,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,37557,1689308152906 in 353 msec 2023-07-14 04:16:06,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-14 04:16:06,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, ASSIGN in 512 msec 2023-07-14 04:16:06,135 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:06,135 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308166135"}]},"ts":"1689308166135"} 2023-07-14 04:16:06,137 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-14 04:16:06,139 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:06,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 619 msec 2023-07-14 04:16:06,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-14 04:16:06,631 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-14 04:16:06,631 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-14 04:16:06,632 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:06,636 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-14 04:16:06,637 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:06,637 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-14 04:16:06,637 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:06,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-14 04:16:06,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:06,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-14 04:16:06,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:06,650 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_529003491 2023-07-14 04:16:06,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_529003491 2023-07-14 04:16:06,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:06,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:06,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:06,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:06,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_529003491 2023-07-14 04:16:06,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 7d5faec8741f408a733da99f8c5b6197 to RSGroup Group_testMultiTableMove_529003491 2023-07-14 04:16:06,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, REOPEN/MOVE 2023-07-14 04:16:06,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_529003491 2023-07-14 04:16:06,662 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, REOPEN/MOVE 2023-07-14 04:16:06,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 96cc97f2298e258d70760d2b41dcaba5 to RSGroup Group_testMultiTableMove_529003491 2023-07-14 04:16:06,663 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:06,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, REOPEN/MOVE 2023-07-14 04:16:06,663 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308166663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308166663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308166663"}]},"ts":"1689308166663"} 2023-07-14 04:16:06,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_529003491, current retry=0 2023-07-14 04:16:06,664 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, REOPEN/MOVE 2023-07-14 04:16:06,666 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:06,666 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:06,666 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308166666"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308166666"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308166666"}]},"ts":"1689308166666"} 2023-07-14 04:16:06,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:06,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7d5faec8741f408a733da99f8c5b6197, disabling compactions & flushes 2023-07-14 04:16:06,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 
2023-07-14 04:16:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. after waiting 0 ms 2023-07-14 04:16:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:06,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:06,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 96cc97f2298e258d70760d2b41dcaba5, disabling compactions & flushes 2023-07-14 04:16:06,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:06,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:06,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. after waiting 0 ms 2023-07-14 04:16:06,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:06,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:06,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:06,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:06,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7d5faec8741f408a733da99f8c5b6197: 2023-07-14 04:16:06,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7d5faec8741f408a733da99f8c5b6197 move to jenkins-hbase4.apache.org,33827,1689308148910 record at close sequenceid=2 2023-07-14 04:16:06,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 
2023-07-14 04:16:06,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 96cc97f2298e258d70760d2b41dcaba5: 2023-07-14 04:16:06,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 96cc97f2298e258d70760d2b41dcaba5 move to jenkins-hbase4.apache.org,33827,1689308148910 record at close sequenceid=2 2023-07-14 04:16:06,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:06,835 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=CLOSED 2023-07-14 04:16:06,835 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308166835"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308166835"}]},"ts":"1689308166835"} 2023-07-14 04:16:06,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:06,837 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=CLOSED 2023-07-14 04:16:06,837 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308166837"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308166837"}]},"ts":"1689308166837"} 2023-07-14 04:16:06,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-14 04:16:06,850 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,37557,1689308152906 in 171 msec 2023-07-14 04:16:06,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-14 04:16:06,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,34763,1689308149192 in 170 msec 2023-07-14 04:16:06,851 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:16:06,851 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33827,1689308148910; forceNewPlan=false, retain=false 2023-07-14 04:16:07,002 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 
04:16:07,002 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:07,002 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308167002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308167002"}]},"ts":"1689308167002"} 2023-07-14 04:16:07,002 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167002"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308167002"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308167002"}]},"ts":"1689308167002"} 2023-07-14 04:16:07,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:16:07,005 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:16:07,160 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:07,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 96cc97f2298e258d70760d2b41dcaba5, NAME => 'GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:07,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:07,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,163 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,166 DEBUG [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/f 2023-07-14 04:16:07,166 DEBUG [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/f 2023-07-14 04:16:07,166 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 96cc97f2298e258d70760d2b41dcaba5 columnFamilyName f 2023-07-14 04:16:07,167 INFO [StoreOpener-96cc97f2298e258d70760d2b41dcaba5-1] regionserver.HStore(310): Store=96cc97f2298e258d70760d2b41dcaba5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:07,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 96cc97f2298e258d70760d2b41dcaba5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10098809280, jitterRate=-0.059475094079971313}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:07,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 96cc97f2298e258d70760d2b41dcaba5: 2023-07-14 04:16:07,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5., pid=105, masterSystemTime=1689308167156 2023-07-14 04:16:07,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:07,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 
2023-07-14 04:16:07,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7d5faec8741f408a733da99f8c5b6197, NAME => 'GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:07,180 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:07,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167180"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308167180"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308167180"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308167180"}]},"ts":"1689308167180"} 2023-07-14 04:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-14 04:16:07,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,33827,1689308148910 in 177 msec 2023-07-14 04:16:07,186 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, REOPEN/MOVE in 521 msec 2023-07-14 04:16:07,187 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,188 DEBUG [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/f 2023-07-14 04:16:07,188 DEBUG [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/f 2023-07-14 04:16:07,189 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7d5faec8741f408a733da99f8c5b6197 columnFamilyName f 2023-07-14 04:16:07,189 INFO [StoreOpener-7d5faec8741f408a733da99f8c5b6197-1] regionserver.HStore(310): Store=7d5faec8741f408a733da99f8c5b6197/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:07,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7d5faec8741f408a733da99f8c5b6197; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12022534240, jitterRate=0.11968575417995453}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:07,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7d5faec8741f408a733da99f8c5b6197: 2023-07-14 04:16:07,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197., pid=104, masterSystemTime=1689308167156 2023-07-14 04:16:07,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:07,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 
2023-07-14 04:16:07,201 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:07,201 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167201"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308167201"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308167201"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308167201"}]},"ts":"1689308167201"} 2023-07-14 04:16:07,206 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-14 04:16:07,206 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,33827,1689308148910 in 199 msec 2023-07-14 04:16:07,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, REOPEN/MOVE in 546 msec 2023-07-14 04:16:07,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-14 04:16:07,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_529003491. 2023-07-14 04:16:07,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:07,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:07,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:07,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-14 04:16:07,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:07,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-14 04:16:07,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:07,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:07,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:07,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_529003491 2023-07-14 04:16:07,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:07,677 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-14 04:16:07,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-14 04:16:07,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:07,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 04:16:07,682 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308167682"}]},"ts":"1689308167682"} 2023-07-14 04:16:07,683 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-14 04:16:07,685 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-14 04:16:07,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, UNASSIGN}] 2023-07-14 04:16:07,690 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, UNASSIGN 2023-07-14 04:16:07,691 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:07,691 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308167691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308167691"}]},"ts":"1689308167691"} 2023-07-14 04:16:07,692 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, 
server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:16:07,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 04:16:07,812 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 04:16:07,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 96cc97f2298e258d70760d2b41dcaba5, disabling compactions & flushes 2023-07-14 04:16:07,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:07,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:07,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. after waiting 0 ms 2023-07-14 04:16:07,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 2023-07-14 04:16:07,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:16:07,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5. 
2023-07-14 04:16:07,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 96cc97f2298e258d70760d2b41dcaba5: 2023-07-14 04:16:07,857 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=96cc97f2298e258d70760d2b41dcaba5, regionState=CLOSED 2023-07-14 04:16:07,857 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308167857"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308167857"}]},"ts":"1689308167857"} 2023-07-14 04:16:07,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-14 04:16:07,861 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 96cc97f2298e258d70760d2b41dcaba5, server=jenkins-hbase4.apache.org,33827,1689308148910 in 167 msec 2023-07-14 04:16:07,864 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-14 04:16:07,865 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=96cc97f2298e258d70760d2b41dcaba5, UNASSIGN in 176 msec 2023-07-14 04:16:07,865 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308167865"}]},"ts":"1689308167865"} 2023-07-14 04:16:07,871 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-14 04:16:07,874 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-14 04:16:07,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 198 msec 2023-07-14 04:16:07,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-14 04:16:07,985 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-14 04:16:07,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-14 04:16:07,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:07,990 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:07,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_529003491' 2023-07-14 04:16:07,991 DEBUG [PEWorker-5] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:07,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:07,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:07,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:07,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:07,996 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:07,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-14 04:16:07,998 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits] 2023-07-14 04:16:08,006 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5/recovered.edits/7.seqid 2023-07-14 04:16:08,007 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveA/96cc97f2298e258d70760d2b41dcaba5 2023-07-14 04:16:08,007 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-14 04:16:08,010 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:08,013 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-14 04:16:08,016 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-14 04:16:08,017 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:08,017 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-14 04:16:08,018 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308168018"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:08,021 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 04:16:08,021 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 96cc97f2298e258d70760d2b41dcaba5, NAME => 'GrouptestMultiTableMoveA,,1689308164902.96cc97f2298e258d70760d2b41dcaba5.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 04:16:08,021 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-14 04:16:08,021 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308168021"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:08,024 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-14 04:16:08,027 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-14 04:16:08,030 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 41 msec 2023-07-14 04:16:08,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-14 04:16:08,100 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-14 04:16:08,101 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-14 04:16:08,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-14 04:16:08,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 04:16:08,113 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308168112"}]},"ts":"1689308168112"} 2023-07-14 04:16:08,114 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-14 04:16:08,116 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-14 04:16:08,117 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, UNASSIGN}] 2023-07-14 04:16:08,119 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, UNASSIGN 2023-07-14 04:16:08,123 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:08,123 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308168123"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308168123"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308168123"}]},"ts":"1689308168123"} 2023-07-14 04:16:08,124 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,33827,1689308148910}] 2023-07-14 04:16:08,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 04:16:08,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:08,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7d5faec8741f408a733da99f8c5b6197, disabling compactions & flushes 2023-07-14 04:16:08,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:08,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:08,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. after waiting 0 ms 2023-07-14 04:16:08,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 2023-07-14 04:16:08,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:16:08,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197. 
2023-07-14 04:16:08,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7d5faec8741f408a733da99f8c5b6197: 2023-07-14 04:16:08,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:08,284 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=7d5faec8741f408a733da99f8c5b6197, regionState=CLOSED 2023-07-14 04:16:08,284 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689308168284"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308168284"}]},"ts":"1689308168284"} 2023-07-14 04:16:08,288 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-14 04:16:08,288 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 7d5faec8741f408a733da99f8c5b6197, server=jenkins-hbase4.apache.org,33827,1689308148910 in 162 msec 2023-07-14 04:16:08,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-14 04:16:08,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=7d5faec8741f408a733da99f8c5b6197, UNASSIGN in 171 msec 2023-07-14 04:16:08,296 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308168296"}]},"ts":"1689308168296"} 2023-07-14 04:16:08,298 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-14 04:16:08,301 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-14 04:16:08,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 201 msec 2023-07-14 04:16:08,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-14 04:16:08,416 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-14 04:16:08,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-14 04:16:08,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,419 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_529003491' 2023-07-14 04:16:08,420 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:08,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,425 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:08,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,428 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits] 2023-07-14 04:16:08,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,439 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits/7.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197/recovered.edits/7.seqid 2023-07-14 04:16:08,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-14 04:16:08,440 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/GrouptestMultiTableMoveB/7d5faec8741f408a733da99f8c5b6197 2023-07-14 04:16:08,440 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-14 04:16:08,444 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,447 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-14 04:16:08,449 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-14 04:16:08,451 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,451 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
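[editor's note] After the disable, DeleteTableProcedure pid=113 archives the region directory through HFileArchiver (files move from .tmp/data/... to archive/data/...), the RSGroupAdminEndpoint drops the deleted table from its group, and the entries that follow remove the region and table-state rows from hbase:meta. From the client side this entire chain is one admin call; a hedged sketch under the same assumptions as above:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class DeleteTableSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        TableName tn = TableName.valueOf("GrouptestMultiTableMoveB");
        // Launches DeleteTableProcedure: FS layout cleared and archived, META rows
        // deleted, table descriptor removed, then DELETE_TABLE_POST_OPERATION.
        admin.deleteTable(tn);
        // Once the procedure reports SUCCESS, the table is gone from META.
        System.out.println("exists? " + admin.tableExists(tn));
      }
    }
  }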
2023-07-14 04:16:08,451 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308168451"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:08,455 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 04:16:08,455 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7d5faec8741f408a733da99f8c5b6197, NAME => 'GrouptestMultiTableMoveB,,1689308165519.7d5faec8741f408a733da99f8c5b6197.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 04:16:08,456 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-14 04:16:08,456 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308168456"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:08,457 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-14 04:16:08,460 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-14 04:16:08,463 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 43 msec 2023-07-14 04:16:08,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-14 04:16:08,541 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-14 04:16:08,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
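[editor's note] The teardown in TestRSGroupsBase then lists the remaining rsgroups and pushes any leftover tables back to default; the empty moveTables call is simply ignored by the server, as the DEBUG line above shows. Assuming the branch-2.4 hbase-rsgroup client classes (RSGroupAdminClient wraps the coprocessor endpoint behind the RSGroupAdminService RPCs logged here), a listing call looks roughly like this; variable names are illustrative:

  import java.util.Collections;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class ListGroupsSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Issues RSGroupAdminService.ListRSGroupInfos, the same RPC logged above.
        for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
          System.out.println(info.getName() + " servers=" + info.getServers()
              + " tables=" + info.getTables());
        }
        // moveTables() with an empty set is a no-op; the server logs
        // "moveTables() passed an empty set. Ignoring."
        rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
      }
    }
  }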
2023-07-14 04:16:08,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827] to rsgroup default 2023-07-14 04:16:08,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_529003491 2023-07-14 04:16:08,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_529003491, current retry=0 2023-07-14 04:16:08,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910] are moved back to Group_testMultiTableMove_529003491 2023-07-14 04:16:08,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_529003491 => default 2023-07-14 04:16:08,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_529003491 2023-07-14 04:16:08,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:08,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
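[editor's note] The block above is the standard per-test cleanup: the server that had been carved out into Group_testMultiTableMove_529003491 is moved back to default, the znodes under /hbase/rsgroup are rewritten, and the now-empty group is removed. A sketch of that pattern with the same client; the group name comes from the log, everything else is assumed setup:

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class RemoveGroupSketch {
    public static void main(String[] args) throws Exception {
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        String group = "Group_testMultiTableMove_529003491";
        // Move every server in the group back to default ("Move servers done:
        // Group_testMultiTableMove_529003491 => default" in the log above).
        for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
          if (group.equals(info.getName()) && !info.getServers().isEmpty()) {
            rsGroupAdmin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
          }
        }
        // An empty group can then be dropped; this rewrites the ZK group info.
        rsGroupAdmin.removeRSGroup(group);
      }
    }
  }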
2023-07-14 04:16:08,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:08,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:08,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:08,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,578 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:08,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:08,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:08,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:08,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 512 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309368589, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:08,590 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:08,592 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:08,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,593 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:08,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,618 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 514), OpenFileDescriptor=787 (was 805), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=561 (was 610), ProcessCount=172 (was 172), AvailableMemoryMB=4100 (was 4240) 2023-07-14 04:16:08,618 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-14 04:16:08,640 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=561, ProcessCount=172, AvailableMemoryMB=4100 2023-07-14 04:16:08,640 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-14 04:16:08,640 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-14 04:16:08,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:08,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:08,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:08,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:08,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,664 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:08,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:08,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:08,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:08,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] ipc.CallRunner(144): callId: 540 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309368687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:08,687 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:08,689 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:08,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,690 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:08,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-14 04:16:08,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup oldGroup 2023-07-14 04:16:08,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to default 2023-07-14 04:16:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-14 04:16:08,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-14 04:16:08,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-14 04:16:08,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,747 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-14 04:16:08,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 04:16:08,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:08,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34763] to rsgroup anotherRSGroup 2023-07-14 04:16:08,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 04:16:08,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:08,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:08,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34763,1689308149192] are moved back to default 2023-07-14 04:16:08,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-14 04:16:08,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,774 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,774 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-14 04:16:08,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-14 04:16:08,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,784 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-14 04:16:08,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:60972 deadline: 1689309368783, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-14 04:16:08,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-14 04:16:08,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:60972 deadline: 1689309368786, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-14 04:16:08,787 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-14 04:16:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:60972 deadline: 1689309368787, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-14 04:16:08,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-14 04:16:08,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 580 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:60972 deadline: 1689309368788, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-14 04:16:08,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,792 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
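The four failed renames earlier in this stretch (callIds 574, 576, 578 and 580) each hit a different guard inside RSGroupInfoManagerImpl.renameRSGroup before any state is touched: the default group may never be renamed, the source group must exist, and the target name must not already be taken (including "default"). A minimal standalone sketch of that guard order, included only for orientation; the class, map and plain IOException below are illustrative stand-ins, not the HBase implementation:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of the guard order implied by the stack traces above
    // (RSGroupInfoManagerImpl.java lines 403, 407 and 410); not the HBase code.
    public class RenameGuardSketch {
      static final String DEFAULT_GROUP = "default";
      // group name -> member servers ("host:port" strings stand in for Address)
      private final Map<String, Set<String>> groups = new HashMap<>();

      public synchronized void renameRSGroup(String oldName, String newName) throws IOException {
        if (DEFAULT_GROUP.equals(oldName)) {
          throw new IOException("Can't rename default rsgroup");           // default -> newRSGroup2
        }
        if (!groups.containsKey(oldName)) {
          throw new IOException("RSGroup " + oldName + " does not exist"); // nonExistingRSGroup -> newRSGroup1
        }
        if (DEFAULT_GROUP.equals(newName) || groups.containsKey(newName)) {
          throw new IOException("Group already exists: " + newName);       // oldGroup -> anotherRSGroup / default
        }
        groups.put(newName, groups.remove(oldName));
      }
    }

Each branch corresponds to one of the ConstraintException messages logged above, which is why all four renames fail without the znodes under /hbase/rsgroup being rewritten.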
2023-07-14 04:16:08,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34763] to rsgroup default 2023-07-14 04:16:08,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-14 04:16:08,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:08,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-14 04:16:08,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34763,1689308149192] are moved back to anotherRSGroup 2023-07-14 04:16:08,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-14 04:16:08,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,803 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-14 04:16:08,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 04:16:08,827 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-14 04:16:08,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup default 2023-07-14 04:16:08,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-14 04:16:08,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-14 04:16:08,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to oldGroup 2023-07-14 04:16:08,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-14 04:16:08,834 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,835 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-14 04:16:08,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:08,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
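The stretch above is the standard per-test cleanup: tables are moved back to the default group first (a no-op here, hence "moveTables() passed an empty set. Ignoring."), then the member servers, and only then is the emptied group removed, with the znodes under /hbase/rsgroup rewritten and the GroupInfo count dropping at each step. A sketch of the same sequence from the client side, under the assumption that the branch-2.4 RSGroupAdminClient named in the stack traces further down exposes moveTables/moveServers/removeRSGroup matching the service calls logged here; the connection setup is a placeholder:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // 1. Tables back to default; an empty set is ignored server-side,
          //    matching the "passed an empty set" DEBUG line above.
          Set<TableName> noTables = Collections.emptySet();
          rsGroupAdmin.moveTables(noTables, "default");
          // 2. Member servers back to default (host:port values as logged above).
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 33827),
              Address.fromParts("jenkins-hbase4.apache.org", 34609)));
          rsGroupAdmin.moveServers(servers, "default");
          // 3. Only an empty group can be removed.
          rsGroupAdmin.removeRSGroup("oldGroup");
        }
      }
    }

The ordering matters: RemoveRSGroup only succeeds once the group no longer owns servers or tables, which is exactly the sequence the teardown follows above.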
2023-07-14 04:16:08,840 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:08,841 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:08,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:08,846 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,849 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:08,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:08,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:08,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:08,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 616 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309368859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:08,860 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:08,862 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:08,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,863 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:08,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,880 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=511 (was 508) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=785 (was 787), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=561 (was 561), ProcessCount=172 (was 172), AvailableMemoryMB=4094 (was 4100) 2023-07-14 04:16:08,880 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-14 04:16:08,897 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=511, OpenFileDescriptor=785, MaxFileDescriptor=60000, SystemLoadAverage=561, ProcessCount=172, AvailableMemoryMB=4094 2023-07-14 04:16:08,897 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-14 04:16:08,897 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-14 04:16:08,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:08,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
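Between the two tests, hbase.ResourceChecker takes an "after" snapshot for testRenameRSGroupConstraints and a "before" snapshot for testRenameRSGroup (threads, open file descriptors, system load, available memory), warning when a limit is crossed; that is why Thread=511 trips the 500-thread warning and the idle hconnection shared-pool workers are dumped as potentially hanging. A tiny standalone illustration of that before/after bracketing idea; only the 500-thread threshold comes from the log, the class, method names and output format are invented:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Illustrative before/after resource bracket in the spirit of the
    // ResourceChecker lines above; not the HBase ResourceChecker itself.
    public class ResourceBracketSketch {
      private static final int THREAD_WARN_THRESHOLD = 500;
      private final ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
      private int threadsBefore;

      public void before(String testName) {
        threadsBefore = threadBean.getThreadCount();
        report("before", testName, threadsBefore);
      }

      public void after(String testName) {
        int now = threadBean.getThreadCount();
        report("after", testName, now);
        if (now > threadsBefore) {
          System.out.println("Possible thread leak: " + (now - threadsBefore) + " more threads than before");
        }
      }

      private void report(String phase, String testName, int count) {
        System.out.printf("%s: %s Thread=%d%n", phase, testName, count);
        if (count > THREAD_WARN_THRESHOLD) {
          System.out.printf("WARN Thread=%d is superior to %d%n", count, THREAD_WARN_THRESHOLD);
        }
      }
    }

testRenameRSGroup's own setup then continues below with the same list/move/remove pattern before the table under test is created.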
2023-07-14 04:16:08,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:08,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:08,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:08,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:08,910 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:08,912 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:08,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:08,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:08,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:08,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:08,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 644 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309368922, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:08,923 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:08,924 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:08,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,925 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:08,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,926 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:08,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,927 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-14 04:16:08,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:08,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,936 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:08,939 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,939 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup oldgroup 2023-07-14 04:16:08,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:08,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:08,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to default 2023-07-14 04:16:08,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-14 04:16:08,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:08,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:08,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:08,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-14 04:16:08,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:08,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:08,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-14 04:16:08,957 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:08,957 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-14 04:16:08,960 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:08,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 04:16:08,960 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:08,961 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:08,961 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:08,964 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:08,965 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:08,966 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/testRename/1533d61a3dc01181d37a5f58d846789c empty. 
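The HMaster entry above shows testRename being created with a single column family 'tr' and otherwise default attributes, after which CreateTableProcedure pid=114 steps through CREATE_TABLE_PRE_OPERATION and CREATE_TABLE_WRITE_FS_LAYOUT (and, below, ADD_TO_META and ASSIGN_REGIONS) while the client keeps polling "Checking to see if procedure is done pid=114". A client-side equivalent of that create call using the public Admin API, as a sketch only; the table name, family name and REGION_REPLICATION come from the log, the connection handling is assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // One family 'tr', one region replica, defaults for everything else,
          // matching the descriptor printed by HMaster above.
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build();
          admin.createTable(desc); // returns once the create procedure has finished
        }
      }
    }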
2023-07-14 04:16:08,967 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:08,967 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-14 04:16:08,986 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:08,987 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1533d61a3dc01181d37a5f58d846789c, NAME => 'testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:09,011 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:09,011 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 1533d61a3dc01181d37a5f58d846789c, disabling compactions & flushes 2023-07-14 04:16:09,012 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,012 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,012 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. after waiting 0 ms 2023-07-14 04:16:09,012 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,012 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,012 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:09,014 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:09,015 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169015"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308169015"}]},"ts":"1689308169015"} 2023-07-14 04:16:09,017 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-14 04:16:09,018 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:09,018 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308169018"}]},"ts":"1689308169018"} 2023-07-14 04:16:09,019 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-14 04:16:09,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:09,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:09,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:09,023 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:09,023 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, ASSIGN}] 2023-07-14 04:16:09,025 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, ASSIGN 2023-07-14 04:16:09,026 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:09,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 04:16:09,177 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 04:16:09,178 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:09,179 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169178"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308169178"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308169178"}]},"ts":"1689308169178"} 2023-07-14 04:16:09,181 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:09,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 04:16:09,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1533d61a3dc01181d37a5f58d846789c, NAME => 'testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:09,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:09,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,346 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,348 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,349 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:09,350 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:09,350 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1533d61a3dc01181d37a5f58d846789c columnFamilyName tr 2023-07-14 04:16:09,351 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(310): Store=1533d61a3dc01181d37a5f58d846789c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:09,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:09,359 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1533d61a3dc01181d37a5f58d846789c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11239912640, jitterRate=0.04679843783378601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:09,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:09,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c., pid=116, masterSystemTime=1689308169333 2023-07-14 04:16:09,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 
2023-07-14 04:16:09,364 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:09,364 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169364"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308169364"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308169364"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308169364"}]},"ts":"1689308169364"} 2023-07-14 04:16:09,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-14 04:16:09,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192 in 188 msec 2023-07-14 04:16:09,376 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-14 04:16:09,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, ASSIGN in 348 msec 2023-07-14 04:16:09,378 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:09,378 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308169378"}]},"ts":"1689308169378"} 2023-07-14 04:16:09,387 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-14 04:16:09,390 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:09,392 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 436 msec 2023-07-14 04:16:09,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-14 04:16:09,565 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-14 04:16:09,565 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-14 04:16:09,566 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:09,569 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-14 04:16:09,569 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:09,569 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
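The "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" entries above come from the HBaseTestingUtility helper. A minimal sketch of that call, assuming a running mini cluster and the existing table; the wrapper method name here is illustrative only:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  // Blocks until every region of the table has a server assignment recorded in hbase:meta,
  // or the utility's wait timeout (60s in this run) elapses.
  static void waitForTable(HBaseTestingUtility util, String table) throws Exception {
    util.waitUntilAllRegionsAssigned(TableName.valueOf(table));
  }
}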
2023-07-14 04:16:09,572 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-14 04:16:09,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:09,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:09,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:09,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:09,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-14 04:16:09,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 1533d61a3dc01181d37a5f58d846789c to RSGroup oldgroup 2023-07-14 04:16:09,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:09,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE 2023-07-14 04:16:09,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-14 04:16:09,579 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE 2023-07-14 04:16:09,579 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:09,579 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169579"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308169579"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308169579"}]},"ts":"1689308169579"} 2023-07-14 04:16:09,581 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:09,584 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-14 04:16:09,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1533d61a3dc01181d37a5f58d846789c, disabling compactions & flushes 2023-07-14 04:16:09,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. after waiting 0 ms 2023-07-14 04:16:09,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:09,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:09,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 
2023-07-14 04:16:09,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:09,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1533d61a3dc01181d37a5f58d846789c move to jenkins-hbase4.apache.org,34609,1689308148721 record at close sequenceid=2 2023-07-14 04:16:09,742 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=CLOSED 2023-07-14 04:16:09,742 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169742"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308169742"}]},"ts":"1689308169742"} 2023-07-14 04:16:09,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:09,748 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-14 04:16:09,749 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192 in 166 msec 2023-07-14 04:16:09,749 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34609,1689308148721; forceNewPlan=false, retain=false 2023-07-14 04:16:09,899 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-14 04:16:09,900 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:09,900 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308169900"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308169900"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308169900"}]},"ts":"1689308169900"} 2023-07-14 04:16:09,901 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:16:10,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 
2023-07-14 04:16:10,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1533d61a3dc01181d37a5f58d846789c, NAME => 'testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:10,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:10,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,063 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,063 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:10,064 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:10,064 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1533d61a3dc01181d37a5f58d846789c columnFamilyName tr 2023-07-14 04:16:10,065 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(310): Store=1533d61a3dc01181d37a5f58d846789c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:10,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:10,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1533d61a3dc01181d37a5f58d846789c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9671802880, jitterRate=-0.0992431640625}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:10,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:10,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c., pid=119, masterSystemTime=1689308170053 2023-07-14 04:16:10,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:10,073 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:10,073 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:10,073 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308170073"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308170073"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308170073"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308170073"}]},"ts":"1689308170073"} 2023-07-14 04:16:10,076 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-14 04:16:10,076 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34609,1689308148721 in 173 msec 2023-07-14 04:16:10,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE in 498 msec 2023-07-14 04:16:10,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-14 04:16:10,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
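The MoveTables request that drove the REOPEN/MOVE above (pid=117) maps to an rsgroup admin call roughly like the sketch below, assuming the branch-2.x hbase-rsgroup client class RSGroupAdminClient; this is not taken from the test source.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      // Updates the table-to-group mapping and reopens the table's regions on servers of
      // 'oldgroup', which is what produces the TransitRegionStateProcedure seen in the log.
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    }
  }
}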
2023-07-14 04:16:10,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:10,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:10,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:10,584 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:10,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-14 04:16:10,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:10,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-14 04:16:10,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:10,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-14 04:16:10,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:10,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:10,588 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:10,588 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-14 04:16:10,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:10,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:10,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:10,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 
04:16:10,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:10,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:10,597 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:10,597 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:10,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34763] to rsgroup normal 2023-07-14 04:16:10,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:10,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:10,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:10,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:10,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:10,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:10,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34763,1689308149192] are moved back to default 2023-07-14 04:16:10,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-14 04:16:10,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:10,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:10,610 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:10,612 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-14 04:16:10,612 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
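The AddRSGroup and MoveServers requests logged above (new group 'normal', server jenkins-hbase4.apache.org:34763 moved out of 'default') could be issued roughly as follows. The helper method and the hostPort parameter are placeholders; the RSGroupAdminClient usage is assumed as in the previous sketch.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerToGroupSketch {
  // hostPort is the region server's address as the master knows it, e.g. "jenkins-hbase4.apache.org:34763".
  static void setUpNormalGroup(Connection conn, String hostPort) throws Exception {
    RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("normal");                                                        // AddRSGroup RPC
    rsGroupAdmin.moveServers(Collections.singleton(Address.fromString(hostPort)), "normal");  // MoveServers RPC
  }
}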
2023-07-14 04:16:10,613 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:10,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-14 04:16:10,616 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:10,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-14 04:16:10,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 04:16:10,617 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:10,618 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:10,618 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:10,618 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:10,619 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:10,621 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:10,622 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,622 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb empty. 
2023-07-14 04:16:10,623 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,623 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-14 04:16:10,642 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:10,643 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => bcb43835975d4f00df4e228eb945f1fb, NAME => 'unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing bcb43835975d4f00df4e228eb945f1fb, disabling compactions & flushes 2023-07-14 04:16:10,659 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. after waiting 0 ms 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:10,659 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:10,659 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:10,662 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:10,663 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308170663"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308170663"}]},"ts":"1689308170663"} 2023-07-14 04:16:10,665 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-14 04:16:10,665 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:10,666 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308170666"}]},"ts":"1689308170666"} 2023-07-14 04:16:10,672 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-14 04:16:10,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, ASSIGN}] 2023-07-14 04:16:10,679 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, ASSIGN 2023-07-14 04:16:10,683 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:10,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 04:16:10,835 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:10,835 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308170835"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308170835"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308170835"}]},"ts":"1689308170835"} 2023-07-14 04:16:10,837 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:10,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 04:16:10,995 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 
2023-07-14 04:16:10,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcb43835975d4f00df4e228eb945f1fb, NAME => 'unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:10,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:10,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,996 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,997 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:10,999 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:10,999 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:10,999 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcb43835975d4f00df4e228eb945f1fb columnFamilyName ut 2023-07-14 04:16:11,000 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(310): Store=bcb43835975d4f00df4e228eb945f1fb/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:11,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:11,007 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcb43835975d4f00df4e228eb945f1fb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10930425600, jitterRate=0.017975211143493652}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:11,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:11,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb., pid=122, masterSystemTime=1689308170989 2023-07-14 04:16:11,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,010 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 
2023-07-14 04:16:11,010 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:11,010 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308171010"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308171010"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308171010"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308171010"}]},"ts":"1689308171010"} 2023-07-14 04:16:11,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-14 04:16:11,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906 in 175 msec 2023-07-14 04:16:11,016 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-14 04:16:11,016 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, ASSIGN in 337 msec 2023-07-14 04:16:11,016 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:11,016 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308171016"}]},"ts":"1689308171016"} 2023-07-14 04:16:11,018 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-14 04:16:11,020 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:11,021 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 407 msec 2023-07-14 04:16:11,217 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-14 04:16:11,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-14 04:16:11,220 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-14 04:16:11,221 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-14 04:16:11,221 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:11,224 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
2023-07-14 04:16:11,224 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:11,224 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-14 04:16:11,226 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-14 04:16:11,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-14 04:16:11,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:11,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:11,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:11,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:11,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-14 04:16:11,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region bcb43835975d4f00df4e228eb945f1fb to RSGroup normal 2023-07-14 04:16:11,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE 2023-07-14 04:16:11,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-14 04:16:11,234 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE 2023-07-14 04:16:11,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:11,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308171234"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308171234"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308171234"}]},"ts":"1689308171234"} 2023-07-14 04:16:11,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:11,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
bcb43835975d4f00df4e228eb945f1fb, disabling compactions & flushes 2023-07-14 04:16:11,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. after waiting 0 ms 2023-07-14 04:16:11,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:11,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:11,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bcb43835975d4f00df4e228eb945f1fb move to jenkins-hbase4.apache.org,34763,1689308149192 record at close sequenceid=2 2023-07-14 04:16:11,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,405 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=CLOSED 2023-07-14 04:16:11,406 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308171405"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308171405"}]},"ts":"1689308171405"} 2023-07-14 04:16:11,408 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-14 04:16:11,408 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906 in 172 msec 2023-07-14 04:16:11,409 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:11,559 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:11,559 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308171559"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308171559"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308171559"}]},"ts":"1689308171559"} 2023-07-14 04:16:11,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:11,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcb43835975d4f00df4e228eb945f1fb, NAME => 'unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:11,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:11,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,719 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,720 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:11,720 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:11,721 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
bcb43835975d4f00df4e228eb945f1fb columnFamilyName ut 2023-07-14 04:16:11,721 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(310): Store=bcb43835975d4f00df4e228eb945f1fb/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:11,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:11,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcb43835975d4f00df4e228eb945f1fb; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10136816960, jitterRate=-0.05593535304069519}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:11,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:11,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb., pid=125, masterSystemTime=1689308171713 2023-07-14 04:16:11,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:11,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 
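For reference, the CLOSE/OPEN sequence logged above is what a plain client-driven region move produces. The following is a minimal sketch, not part of this test run, assuming the Admin#move(byte[], ServerName) overload and the standard client factory classes are available on this branch; the class name MoveRegionSketch and the hard-coded destination string (copied from the log's host,port,startcode form) are illustrative only:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Pick the single region of the (illustrative) table and a destination server.
          RegionInfo region = admin.getRegions(TableName.valueOf("unmovedTable")).get(0);
          ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org,34763,1689308149192");
          // The master runs a TransitRegionStateProcedure (REOPEN/MOVE): close on the source
          // server, write the recovered.edits seqid marker, reopen on the destination --
          // the same steps the procedure pids above walk through.
          admin.move(region.getEncodedNameAsBytes(), dest);
        }
      }
    }
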
2023-07-14 04:16:11,729 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:11,729 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308171728"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308171728"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308171728"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308171728"}]},"ts":"1689308171728"} 2023-07-14 04:16:11,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-14 04:16:11,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,34763,1689308149192 in 168 msec 2023-07-14 04:16:11,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE in 498 msec 2023-07-14 04:16:12,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-14 04:16:12,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-14 04:16:12,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:12,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:12,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:12,240 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:12,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 04:16:12,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:12,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-14 04:16:12,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:12,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 04:16:12,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:12,244 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-14 04:16:12,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:12,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:12,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:12,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:12,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-14 04:16:12,252 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-14 04:16:12,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:12,255 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:12,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-14 04:16:12,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:12,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-14 04:16:12,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:12,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-14 04:16:12,259 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:12,265 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:12,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:12,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-14 04:16:12,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:12,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:12,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:12,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:12,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:12,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-14 04:16:12,276 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region bcb43835975d4f00df4e228eb945f1fb to RSGroup default 2023-07-14 04:16:12,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE 2023-07-14 04:16:12,278 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 04:16:12,278 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE 2023-07-14 04:16:12,278 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:12,279 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308172278"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308172278"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308172278"}]},"ts":"1689308172278"} 2023-07-14 04:16:12,286 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:12,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bcb43835975d4f00df4e228eb945f1fb, disabling compactions & flushes 2023-07-14 04:16:12,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. after waiting 0 ms 2023-07-14 04:16:12,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:16:12,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:12,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bcb43835975d4f00df4e228eb945f1fb move to jenkins-hbase4.apache.org,37557,1689308152906 record at close sequenceid=5 2023-07-14 04:16:12,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,450 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=CLOSED 2023-07-14 04:16:12,450 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308172450"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308172450"}]},"ts":"1689308172450"} 2023-07-14 04:16:12,454 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-14 04:16:12,454 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,34763,1689308149192 in 165 msec 2023-07-14 04:16:12,454 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:12,605 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:12,605 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308172605"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308172605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308172605"}]},"ts":"1689308172605"} 2023-07-14 04:16:12,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:12,764 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcb43835975d4f00df4e228eb945f1fb, NAME => 'unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:12,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:12,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,766 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,767 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:12,767 DEBUG [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/ut 2023-07-14 04:16:12,767 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcb43835975d4f00df4e228eb945f1fb columnFamilyName ut 2023-07-14 04:16:12,768 INFO [StoreOpener-bcb43835975d4f00df4e228eb945f1fb-1] regionserver.HStore(310): Store=bcb43835975d4f00df4e228eb945f1fb/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:12,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:12,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcb43835975d4f00df4e228eb945f1fb; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9514001280, jitterRate=-0.11393958330154419}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:12,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:12,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb., pid=128, masterSystemTime=1689308172760 2023-07-14 04:16:12,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:12,776 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 
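The rename recorded above (oldgroup to newgroup at 04:16:12,244) goes through the RenameRSGroup RPC, and the follow-up GetRSGroupInfoOfTable calls confirm the tables now report the new group. A hedged sketch of the client side, assuming the hbase-rsgroup module's RSGroupAdminClient (the class also visible in the stack traces later in this log) exposes renameRSGroup and getRSGroupInfoOfTable with these signatures on branch-2.4; names are the ones this test uses:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameRSGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues RSGroupAdminService.RenameRSGroup; the master rewrites the
          // /hbase/rsgroup znodes, as the "Updating znode" lines above show.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // Tables that belonged to the old group should now report the new one,
          // matching the GetRSGroupInfoOfTable requests in the log.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in rsgroup " + info.getName()); // expected: newgroup
        }
      }
    }
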
2023-07-14 04:16:12,776 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=bcb43835975d4f00df4e228eb945f1fb, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:12,776 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689308172776"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308172776"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308172776"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308172776"}]},"ts":"1689308172776"} 2023-07-14 04:16:12,779 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-14 04:16:12,779 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure bcb43835975d4f00df4e228eb945f1fb, server=jenkins-hbase4.apache.org,37557,1689308152906 in 170 msec 2023-07-14 04:16:12,780 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=bcb43835975d4f00df4e228eb945f1fb, REOPEN/MOVE in 502 msec 2023-07-14 04:16:13,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-14 04:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-14 04:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:13,280 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34763] to rsgroup default 2023-07-14 04:16:13,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-14 04:16:13,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:13,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:13,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:13,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:13,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-14 04:16:13,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34763,1689308149192] are moved back to normal 2023-07-14 04:16:13,285 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-14 04:16:13,285 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:13,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-14 04:16:13,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:13,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:13,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:13,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 04:16:13,293 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:13,293 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:13,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:13,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:13,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:13,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:13,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:13,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:13,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:13,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:13,302 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:13,304 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-14 04:16:13,306 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:13,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:13,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:13,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-14 04:16:13,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(345): Moving region 1533d61a3dc01181d37a5f58d846789c to RSGroup default 2023-07-14 04:16:13,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE 2023-07-14 04:16:13,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-14 04:16:13,309 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE 2023-07-14 04:16:13,309 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:13,309 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308173309"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308173309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308173309"}]},"ts":"1689308173309"} 2023-07-14 04:16:13,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34609,1689308148721}] 2023-07-14 04:16:13,355 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-14 04:16:13,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1533d61a3dc01181d37a5f58d846789c, disabling compactions & flushes 2023-07-14 04:16:13,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 
after waiting 0 ms 2023-07-14 04:16:13,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-14 04:16:13,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:13,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1533d61a3dc01181d37a5f58d846789c move to jenkins-hbase4.apache.org,34763,1689308149192 record at close sequenceid=5 2023-07-14 04:16:13,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,474 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=CLOSED 2023-07-14 04:16:13,474 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308173474"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308173474"}]},"ts":"1689308173474"} 2023-07-14 04:16:13,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-14 04:16:13,477 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34609,1689308148721 in 164 msec 2023-07-14 04:16:13,478 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:13,628 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
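The teardown steps above (moving unmovedTable and testRename back to default, returning server 34763 to default, then dropping the normal and master groups) follow the usual rsgroup cleanup pattern: empty a group of tables and servers before removing it. A rough sketch under the same API assumptions as the previous example; the server address and group names are copied from the log purely for illustration:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the test tables back to the default group (MoveTables RPC); each move
          // reopens the table's regions on the target group's servers, as the
          // TransitRegionStateProcedures above show.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")),
              RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")),
              RSGroupInfo.DEFAULT_GROUP);
          // Return the region server to default, then drop the now-empty group
          // (MoveServers + RemoveRSGroup RPCs; the master rewrites the group znodes each time).
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34763)),
              RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }
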
2023-07-14 04:16:13,628 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:13,628 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308173628"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308173628"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308173628"}]},"ts":"1689308173628"} 2023-07-14 04:16:13,630 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:13,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1533d61a3dc01181d37a5f58d846789c, NAME => 'testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:13,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:13,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,788 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,789 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:13,789 DEBUG [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/tr 2023-07-14 04:16:13,791 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1533d61a3dc01181d37a5f58d846789c columnFamilyName tr 2023-07-14 04:16:13,792 INFO [StoreOpener-1533d61a3dc01181d37a5f58d846789c-1] regionserver.HStore(310): Store=1533d61a3dc01181d37a5f58d846789c/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:13,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:13,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1533d61a3dc01181d37a5f58d846789c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10984400800, jitterRate=0.02300204336643219}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:13,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:13,800 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c., pid=131, masterSystemTime=1689308173781 2023-07-14 04:16:13,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:13,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 
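The repeated "list rsgroup" requests above, and the cleanup-wait check further down that prints each group's Name, Servers and Tables, read group membership back from the master over ListRSGroupInfos. A small sketch of that read path under the same assumptions as the earlier examples; the printed fields mirror what the log shows:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRSGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // One RSGroupInfo per group, carrying its servers and tables -- the same data
          // TestRSGroupsBase polls while waiting for cleanup to finish.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println("Name:" + group.getName()
                + ", Servers:" + group.getServers()
                + ", Tables:" + group.getTables());
          }
        }
      }
    }
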
2023-07-14 04:16:13,804 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=1533d61a3dc01181d37a5f58d846789c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:13,804 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689308173804"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308173804"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308173804"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308173804"}]},"ts":"1689308173804"} 2023-07-14 04:16:13,808 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-14 04:16:13,808 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 1533d61a3dc01181d37a5f58d846789c, server=jenkins-hbase4.apache.org,34763,1689308149192 in 176 msec 2023-07-14 04:16:13,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=1533d61a3dc01181d37a5f58d846789c, REOPEN/MOVE in 500 msec 2023-07-14 04:16:14,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-14 04:16:14,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-14 04:16:14,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:14,310 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup default 2023-07-14 04:16:14,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-14 04:16:14,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:14,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-14 04:16:14,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to newgroup 2023-07-14 04:16:14,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-14 04:16:14,315 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:14,316 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-14 04:16:14,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:14,321 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:14,324 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:14,324 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:14,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:14,333 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:14,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,338 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:14,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 764 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309374338, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:14,338 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:14,340 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:14,340 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,340 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,341 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:14,341 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:14,341 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,358 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=507 (was 511), OpenFileDescriptor=780 (was 785), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 561), ProcessCount=172 (was 172), AvailableMemoryMB=3941 (was 4094) 2023-07-14 04:16:14,358 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-14 04:16:14,374 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=507, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3941 2023-07-14 04:16:14,375 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-14 04:16:14,375 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-14 04:16:14,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:14,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:14,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:14,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:14,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:14,382 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:14,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:14,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:14,389 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:14,390 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:14,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,392 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:14,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:14,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:14,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 792 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309374401, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:14,402 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:14,403 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:14,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,405 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:14,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:14,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-14 04:16:14,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:14,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-14 04:16:14,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-14 04:16:14,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-14 04:16:14,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-14 04:16:14,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:60972 deadline: 1689309374415, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-14 04:16:14,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-14 04:16:14,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:60972 deadline: 1689309374417, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-14 04:16:14,420 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-14 04:16:14,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-14 04:16:14,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-14 04:16:14,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 811 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:60972 deadline: 1689309374425, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-14 04:16:14,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,430 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:14,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:14,431 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:14,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:14,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:14,433 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:14,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:14,438 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:14,443 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:14,444 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:14,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:14,452 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:14,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,456 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:14,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 835 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309374459, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:14,463 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:14,465 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:14,466 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,467 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,467 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:14,468 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:14,468 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,489 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511 (was 507) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6391c436-shared-pool-28 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=780 (was 780), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3940 (was 3941) 2023-07-14 04:16:14,490 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-14 04:16:14,511 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=511, OpenFileDescriptor=780, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3939 2023-07-14 04:16:14,511 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-14 04:16:14,512 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-14 04:16:14,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:14,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:14,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:14,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:14,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:14,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:14,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:14,524 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:14,526 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:14,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:14,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:14,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:14,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:14,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:14,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 863 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309374535, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:14,536 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:14,537 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:14,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,538 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:14,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:14,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:14,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,542 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:14,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:14,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:14,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-14 04:16:14,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to default 2023-07-14 04:16:14,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:14,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:14,559 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:14,562 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,562 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:14,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:14,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:14,566 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:14,567 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-14 04:16:14,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 04:16:14,568 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:14,568 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:14,569 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:14,569 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:14,571 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:14,575 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,575 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,575 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,575 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,575 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 empty. 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 empty. 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f empty. 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 empty. 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af empty. 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,576 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,577 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,577 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:14,577 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-14 04:16:14,594 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:14,596 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 248ac73ed84f325dc91b9d5fe3b3f76f, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:14,596 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 496db64b68cc336813db711c0434f2af, NAME => 'Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:14,596 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => ce8de8b5c75e3d72ccf6e0f39dbb9185, NAME => 'Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 248ac73ed84f325dc91b9d5fe3b3f76f, disabling compactions & flushes 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing ce8de8b5c75e3d72ccf6e0f39dbb9185, disabling compactions & flushes 2023-07-14 04:16:14,623 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:14,623 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 
2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. after waiting 0 ms 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. after waiting 0 ms 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:14,623 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:14,623 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 
2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for ce8de8b5c75e3d72ccf6e0f39dbb9185: 2023-07-14 04:16:14,624 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => d9b2457233e7ffc30f097d32d5174f00, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:14,623 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 248ac73ed84f325dc91b9d5fe3b3f76f: 2023-07-14 04:16:14,625 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9cade64e71b828e853c310f90fe39cc0, NAME => 'Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp 2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 496db64b68cc336813db711c0434f2af, disabling compactions & flushes 2023-07-14 04:16:14,626 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. after waiting 0 ms 2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:14,626 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 
2023-07-14 04:16:14,626 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 496db64b68cc336813db711c0434f2af: 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing d9b2457233e7ffc30f097d32d5174f00, disabling compactions & flushes 2023-07-14 04:16:14,635 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. after waiting 0 ms 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,635 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,635 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for d9b2457233e7ffc30f097d32d5174f00: 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 9cade64e71b828e853c310f90fe39cc0, disabling compactions & flushes 2023-07-14 04:16:14,639 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. after waiting 0 ms 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 
2023-07-14 04:16:14,639 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:14,639 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 9cade64e71b828e853c310f90fe39cc0: 2023-07-14 04:16:14,642 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:14,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308174643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308174643"}]},"ts":"1689308174643"} 2023-07-14 04:16:14,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308174643"}]},"ts":"1689308174643"} 2023-07-14 04:16:14,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308174643"}]},"ts":"1689308174643"} 2023-07-14 04:16:14,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308174643"}]},"ts":"1689308174643"} 2023-07-14 04:16:14,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308174643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308174643"}]},"ts":"1689308174643"} 2023-07-14 04:16:14,646 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-14 04:16:14,647 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:14,647 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308174647"}]},"ts":"1689308174647"} 2023-07-14 04:16:14,648 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-14 04:16:14,651 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:14,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:14,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:14,652 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:14,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, ASSIGN}] 2023-07-14 04:16:14,654 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, ASSIGN 2023-07-14 04:16:14,654 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, ASSIGN 2023-07-14 04:16:14,654 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, ASSIGN 2023-07-14 04:16:14,654 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, ASSIGN 2023-07-14 04:16:14,655 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, ASSIGN 2023-07-14 04:16:14,655 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:14,655 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37557,1689308152906; forceNewPlan=false, retain=false 2023-07-14 04:16:14,655 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:14,655 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:14,655 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34763,1689308149192; forceNewPlan=false, retain=false 2023-07-14 04:16:14,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 04:16:14,805 INFO [jenkins-hbase4:34797] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-14 04:16:14,809 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=d9b2457233e7ffc30f097d32d5174f00, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:14,809 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=ce8de8b5c75e3d72ccf6e0f39dbb9185, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:14,809 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308174809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308174809"}]},"ts":"1689308174809"} 2023-07-14 04:16:14,809 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=248ac73ed84f325dc91b9d5fe3b3f76f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:14,809 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=496db64b68cc336813db711c0434f2af, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:14,809 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308174809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308174809"}]},"ts":"1689308174809"} 2023-07-14 04:16:14,809 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308174809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308174809"}]},"ts":"1689308174809"} 2023-07-14 04:16:14,809 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=9cade64e71b828e853c310f90fe39cc0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:14,809 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308174809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308174809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308174809"}]},"ts":"1689308174809"} 2023-07-14 04:16:14,809 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308174809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308174809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308174809"}]},"ts":"1689308174809"} 2023-07-14 04:16:14,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=136, state=RUNNABLE; OpenRegionProcedure d9b2457233e7ffc30f097d32d5174f00, 
server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:14,812 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure 248ac73ed84f325dc91b9d5fe3b3f76f, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:14,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=134, state=RUNNABLE; OpenRegionProcedure 496db64b68cc336813db711c0434f2af, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:14,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=133, state=RUNNABLE; OpenRegionProcedure ce8de8b5c75e3d72ccf6e0f39dbb9185, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:14,815 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure 9cade64e71b828e853c310f90fe39cc0, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:14,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 04:16:14,967 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d9b2457233e7ffc30f097d32d5174f00, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-14 04:16:14,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,969 INFO [StoreOpener-d9b2457233e7ffc30f097d32d5174f00-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,971 DEBUG [StoreOpener-d9b2457233e7ffc30f097d32d5174f00-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/f 2023-07-14 04:16:14,971 DEBUG [StoreOpener-d9b2457233e7ffc30f097d32d5174f00-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/f 2023-07-14 04:16:14,972 INFO [StoreOpener-d9b2457233e7ffc30f097d32d5174f00-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d9b2457233e7ffc30f097d32d5174f00 columnFamilyName f 2023-07-14 04:16:14,972 INFO [StoreOpener-d9b2457233e7ffc30f097d32d5174f00-1] regionserver.HStore(310): Store=d9b2457233e7ffc30f097d32d5174f00/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:14,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:14,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ce8de8b5c75e3d72ccf6e0f39dbb9185, NAME => 'Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-14 04:16:14,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,974 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,975 INFO [StoreOpener-ce8de8b5c75e3d72ccf6e0f39dbb9185-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 
ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,976 DEBUG [StoreOpener-ce8de8b5c75e3d72ccf6e0f39dbb9185-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/f 2023-07-14 04:16:14,977 DEBUG [StoreOpener-ce8de8b5c75e3d72ccf6e0f39dbb9185-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/f 2023-07-14 04:16:14,977 INFO [StoreOpener-ce8de8b5c75e3d72ccf6e0f39dbb9185-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ce8de8b5c75e3d72ccf6e0f39dbb9185 columnFamilyName f 2023-07-14 04:16:14,977 INFO [StoreOpener-ce8de8b5c75e3d72ccf6e0f39dbb9185-1] regionserver.HStore(310): Store=ce8de8b5c75e3d72ccf6e0f39dbb9185/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:14,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,982 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:14,983 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:14,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:14,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ce8de8b5c75e3d72ccf6e0f39dbb9185; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11854730880, jitterRate=0.10405784845352173}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:14,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ce8de8b5c75e3d72ccf6e0f39dbb9185: 2023-07-14 04:16:14,985 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:14,986 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185., pid=141, masterSystemTime=1689308174969 2023-07-14 04:16:14,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d9b2457233e7ffc30f097d32d5174f00; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11824016960, jitterRate=0.10119739174842834}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:14,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d9b2457233e7ffc30f097d32d5174f00: 2023-07-14 04:16:14,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00., pid=138, masterSystemTime=1689308174963 2023-07-14 04:16:14,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:14,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:14,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 
2023-07-14 04:16:14,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9cade64e71b828e853c310f90fe39cc0, NAME => 'Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-14 04:16:14,988 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=ce8de8b5c75e3d72ccf6e0f39dbb9185, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:14,988 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308174988"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308174988"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308174988"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308174988"}]},"ts":"1689308174988"} 2023-07-14 04:16:14,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:14,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 
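The CompactionConfiguration(173) entries printed for each store here (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560) match the out-of-the-box compaction tuning. A hedged sketch of the standard hbase-site keys behind those numbers; the explicit values only restate what the log prints and would not need to be set on a real cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: restates the defaults that produce the CompactionConfiguration
// lines in this log.
public class CompactionDefaultsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);  // minCompactSize:128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                         // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                        // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);                  // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);          // off-peak ratio 5.000000
    // throttle point 2684354560 = 2 * maxFilesToCompact * 128 MB memstore flush size
    System.out.println("compaction ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 0F));
  }
}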
2023-07-14 04:16:14,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 248ac73ed84f325dc91b9d5fe3b3f76f, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-14 04:16:14,990 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=d9b2457233e7ffc30f097d32d5174f00, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:14,990 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308174989"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308174989"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308174989"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308174989"}]},"ts":"1689308174989"} 2023-07-14 04:16:14,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:14,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,990 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,990 INFO [StoreOpener-9cade64e71b828e853c310f90fe39cc0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,991 INFO [StoreOpener-248ac73ed84f325dc91b9d5fe3b3f76f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,992 DEBUG [StoreOpener-9cade64e71b828e853c310f90fe39cc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/f 2023-07-14 04:16:14,992 DEBUG [StoreOpener-9cade64e71b828e853c310f90fe39cc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/f 2023-07-14 04:16:14,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=133 2023-07-14 04:16:14,992 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, 
ppid=133, state=SUCCESS; OpenRegionProcedure ce8de8b5c75e3d72ccf6e0f39dbb9185, server=jenkins-hbase4.apache.org,34763,1689308149192 in 176 msec 2023-07-14 04:16:14,992 INFO [StoreOpener-9cade64e71b828e853c310f90fe39cc0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9cade64e71b828e853c310f90fe39cc0 columnFamilyName f 2023-07-14 04:16:14,993 DEBUG [StoreOpener-248ac73ed84f325dc91b9d5fe3b3f76f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/f 2023-07-14 04:16:14,993 DEBUG [StoreOpener-248ac73ed84f325dc91b9d5fe3b3f76f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/f 2023-07-14 04:16:14,993 INFO [StoreOpener-9cade64e71b828e853c310f90fe39cc0-1] regionserver.HStore(310): Store=9cade64e71b828e853c310f90fe39cc0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:14,993 INFO [StoreOpener-248ac73ed84f325dc91b9d5fe3b3f76f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 248ac73ed84f325dc91b9d5fe3b3f76f columnFamilyName f 2023-07-14 04:16:14,994 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=136 2023-07-14 04:16:14,994 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, ASSIGN in 340 msec 2023-07-14 04:16:14,994 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=136, state=SUCCESS; OpenRegionProcedure d9b2457233e7ffc30f097d32d5174f00, server=jenkins-hbase4.apache.org,37557,1689308152906 in 180 msec 2023-07-14 04:16:14,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,994 INFO 
[StoreOpener-248ac73ed84f325dc91b9d5fe3b3f76f-1] regionserver.HStore(310): Store=248ac73ed84f325dc91b9d5fe3b3f76f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:14,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,995 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, ASSIGN in 342 msec 2023-07-14 04:16:14,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:14,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:14,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:15,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 248ac73ed84f325dc91b9d5fe3b3f76f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11572422080, jitterRate=0.07776579260826111}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:15,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9cade64e71b828e853c310f90fe39cc0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11575792320, jitterRate=0.07807967066764832}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 248ac73ed84f325dc91b9d5fe3b3f76f: 2023-07-14 04:16:15,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9cade64e71b828e853c310f90fe39cc0: 2023-07-14 04:16:15,001 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0., pid=142, masterSystemTime=1689308174969 2023-07-14 04:16:15,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f., pid=139, masterSystemTime=1689308174963 2023-07-14 04:16:15,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:15,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:15,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 496db64b68cc336813db711c0434f2af, NAME => 'Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-14 04:16:15,003 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=9cade64e71b828e853c310f90fe39cc0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,003 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308175003"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308175003"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308175003"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308175003"}]},"ts":"1689308175003"} 2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,003 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 
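The RegionStateStore "updating hbase:meta row=..., regionState=OPEN, openSeqNum=2, regionLocation=..." entries record where each region of Group_testDisabledTableMove ended up. The same placement can be read back from a client through the public RegionLocator API; a small sketch, where the connection configuration is the only assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(
             TableName.valueOf("Group_testDisabledTableMove"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints the encoded region name and its hosting region server,
        // mirroring the regionLocation column written to hbase:meta above.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}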
2023-07-14 04:16:15,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,003 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=248ac73ed84f325dc91b9d5fe3b3f76f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:15,004 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175003"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308175003"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308175003"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308175003"}]},"ts":"1689308175003"} 2023-07-14 04:16:15,004 INFO [StoreOpener-496db64b68cc336813db711c0434f2af-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,006 DEBUG [StoreOpener-496db64b68cc336813db711c0434f2af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/f 2023-07-14 04:16:15,006 DEBUG [StoreOpener-496db64b68cc336813db711c0434f2af-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/f 2023-07-14 04:16:15,006 INFO [StoreOpener-496db64b68cc336813db711c0434f2af-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 496db64b68cc336813db711c0434f2af columnFamilyName f 2023-07-14 04:16:15,007 INFO [StoreOpener-496db64b68cc336813db711c0434f2af-1] regionserver.HStore(310): Store=496db64b68cc336813db711c0434f2af/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:15,007 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-14 04:16:15,007 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; OpenRegionProcedure 9cade64e71b828e853c310f90fe39cc0, server=jenkins-hbase4.apache.org,34763,1689308149192 in 189 msec 2023-07-14 04:16:15,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-14 04:16:15,008 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure 248ac73ed84f325dc91b9d5fe3b3f76f, server=jenkins-hbase4.apache.org,37557,1689308152906 in 193 msec 2023-07-14 04:16:15,008 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, ASSIGN in 355 msec 2023-07-14 04:16:15,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,009 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, ASSIGN in 356 msec 2023-07-14 04:16:15,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:15,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 496db64b68cc336813db711c0434f2af; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10639615520, jitterRate=-0.009108588099479675}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:15,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 496db64b68cc336813db711c0434f2af: 2023-07-14 04:16:15,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af., pid=140, masterSystemTime=1689308174969 2023-07-14 04:16:15,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:15,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 
2023-07-14 04:16:15,015 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=496db64b68cc336813db711c0434f2af, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,015 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175015"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308175015"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308175015"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308175015"}]},"ts":"1689308175015"} 2023-07-14 04:16:15,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=134 2023-07-14 04:16:15,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=134, state=SUCCESS; OpenRegionProcedure 496db64b68cc336813db711c0434f2af, server=jenkins-hbase4.apache.org,34763,1689308149192 in 203 msec 2023-07-14 04:16:15,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-14 04:16:15,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, ASSIGN in 365 msec 2023-07-14 04:16:15,019 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:15,019 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308175019"}]},"ts":"1689308175019"} 2023-07-14 04:16:15,020 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-14 04:16:15,022 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:15,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 459 msec 2023-07-14 04:16:15,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-14 04:16:15,171 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-14 04:16:15,172 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-14 04:16:15,172 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:15,187 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
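At this point CreateTableProcedure pid=132 has completed and the test is waiting for the assignment manager to settle before disabling the table. A rough sketch of that client-side sequence using public testing and admin APIs; the class name, the simplified split keys, and the 60-second timeout are assumptions for illustration, and the actual TestRSGroupsAdmin1 code may differ:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class DisabledTableMoveFlowSketch {
  public static void run(HBaseTestingUtility testUtil) throws Exception {
    TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
    // Simplified to two split keys; the log shows five regions with keys up to 'zzzzz'.
    byte[][] splits = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };
    try (Admin admin = testUtil.getConnection().getAdmin()) {
      admin.createTable(
          TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
      // Corresponds to the "Waiting until all regions ... get assigned. Timeout = 60000ms" entries.
      testUtil.waitUntilAllRegionsAssigned(tableName, 60000);
      // Disable the table before asking the rsgroup endpoint to move it.
      admin.disableTable(tableName);
    }
  }
}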
2023-07-14 04:16:15,188 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:15,188 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-14 04:16:15,188 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:15,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-14 04:16:15,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:15,197 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-14 04:16:15,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-14 04:16:15,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 04:16:15,211 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308175211"}]},"ts":"1689308175211"} 2023-07-14 04:16:15,212 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-14 04:16:15,216 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-14 04:16:15,217 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, UNASSIGN}] 2023-07-14 04:16:15,219 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, UNASSIGN 2023-07-14 04:16:15,219 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, UNASSIGN 2023-07-14 04:16:15,220 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, UNASSIGN 2023-07-14 04:16:15,220 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, UNASSIGN 2023-07-14 04:16:15,220 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, UNASSIGN 2023-07-14 04:16:15,220 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=496db64b68cc336813db711c0434f2af, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,220 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=248ac73ed84f325dc91b9d5fe3b3f76f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:15,220 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308175220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308175220"}]},"ts":"1689308175220"} 2023-07-14 04:16:15,221 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308175220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308175220"}]},"ts":"1689308175220"} 2023-07-14 04:16:15,220 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=ce8de8b5c75e3d72ccf6e0f39dbb9185, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,221 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308175220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308175220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308175220"}]},"ts":"1689308175220"} 2023-07-14 04:16:15,221 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=9cade64e71b828e853c310f90fe39cc0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,221 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308175221"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308175221"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308175221"}]},"ts":"1689308175221"} 2023-07-14 04:16:15,221 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=d9b2457233e7ffc30f097d32d5174f00, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:15,222 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175221"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308175221"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308175221"}]},"ts":"1689308175221"} 2023-07-14 04:16:15,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure 496db64b68cc336813db711c0434f2af, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:15,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=146, state=RUNNABLE; CloseRegionProcedure 248ac73ed84f325dc91b9d5fe3b3f76f, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:15,224 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=144, state=RUNNABLE; CloseRegionProcedure ce8de8b5c75e3d72ccf6e0f39dbb9185, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:15,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure 9cade64e71b828e853c310f90fe39cc0, server=jenkins-hbase4.apache.org,34763,1689308149192}] 2023-07-14 04:16:15,226 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure d9b2457233e7ffc30f097d32d5174f00, server=jenkins-hbase4.apache.org,37557,1689308152906}] 2023-07-14 04:16:15,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 04:16:15,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:15,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9cade64e71b828e853c310f90fe39cc0, disabling compactions & flushes 2023-07-14 04:16:15,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:15,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:15,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 
after waiting 0 ms 2023-07-14 04:16:15,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 2023-07-14 04:16:15,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:15,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 248ac73ed84f325dc91b9d5fe3b3f76f, disabling compactions & flushes 2023-07-14 04:16:15,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:15,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:15,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. after waiting 0 ms 2023-07-14 04:16:15,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:15,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:15,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:15,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f. 2023-07-14 04:16:15,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0. 
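The wal.WALSplitUtil(408) entries above show a recovered.edits/N.seqid marker being rewritten at close time (4.seqid replacing the 1.seqid written at open), which is how the max sequence id is persisted even though no recovered edits exist. A small hedged sketch of listing those markers with the Hadoop FileSystem API; the region directory is copied from the log and nothing further about HBase internals is implied:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeqIdMarkerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Region directory taken from the log above; adjust for another cluster.
    Path regionDir = new Path("hdfs://localhost:33983/user/jenkins/test-data/"
        + "73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/"
        + "Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0");
    FileSystem fs = regionDir.getFileSystem(conf);
    for (FileStatus st : fs.listStatus(new Path(regionDir, "recovered.edits"))) {
      // Expect a marker such as 4.seqid after the close logged above.
      System.out.println(st.getPath().getName());
    }
  }
}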
2023-07-14 04:16:15,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9cade64e71b828e853c310f90fe39cc0: 2023-07-14 04:16:15,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 248ac73ed84f325dc91b9d5fe3b3f76f: 2023-07-14 04:16:15,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:15,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,390 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=9cade64e71b828e853c310f90fe39cc0, regionState=CLOSED 2023-07-14 04:16:15,390 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308175390"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308175390"}]},"ts":"1689308175390"} 2023-07-14 04:16:15,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:15,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:15,391 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=248ac73ed84f325dc91b9d5fe3b3f76f, regionState=CLOSED 2023-07-14 04:16:15,391 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175391"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308175391"}]},"ts":"1689308175391"} 2023-07-14 04:16:15,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-14 04:16:15,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure 9cade64e71b828e853c310f90fe39cc0, server=jenkins-hbase4.apache.org,34763,1689308149192 in 167 msec 2023-07-14 04:16:15,395 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9cade64e71b828e853c310f90fe39cc0, UNASSIGN in 176 msec 2023-07-14 04:16:15,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d9b2457233e7ffc30f097d32d5174f00, disabling compactions & flushes 2023-07-14 04:16:15,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 496db64b68cc336813db711c0434f2af, disabling compactions & flushes 2023-07-14 04:16:15,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=146 2023-07-14 04:16:15,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 
2023-07-14 04:16:15,396 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; CloseRegionProcedure 248ac73ed84f325dc91b9d5fe3b3f76f, server=jenkins-hbase4.apache.org,37557,1689308152906 in 170 msec 2023-07-14 04:16:15,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. after waiting 0 ms 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. after waiting 0 ms 2023-07-14 04:16:15,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 2023-07-14 04:16:15,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=248ac73ed84f325dc91b9d5fe3b3f76f, UNASSIGN in 178 msec 2023-07-14 04:16:15,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:15,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:15,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af. 2023-07-14 04:16:15,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 496db64b68cc336813db711c0434f2af: 2023-07-14 04:16:15,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00. 
2023-07-14 04:16:15,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d9b2457233e7ffc30f097d32d5174f00: 2023-07-14 04:16:15,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:15,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ce8de8b5c75e3d72ccf6e0f39dbb9185, disabling compactions & flushes 2023-07-14 04:16:15,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:15,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 2023-07-14 04:16:15,403 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=496db64b68cc336813db711c0434f2af, regionState=CLOSED 2023-07-14 04:16:15,403 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175403"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308175403"}]},"ts":"1689308175403"} 2023-07-14 04:16:15,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:15,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. after waiting 0 ms 2023-07-14 04:16:15,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 
2023-07-14 04:16:15,403 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=d9b2457233e7ffc30f097d32d5174f00, regionState=CLOSED 2023-07-14 04:16:15,404 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689308175403"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308175403"}]},"ts":"1689308175403"} 2023-07-14 04:16:15,406 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-14 04:16:15,406 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure 496db64b68cc336813db711c0434f2af, server=jenkins-hbase4.apache.org,34763,1689308149192 in 182 msec 2023-07-14 04:16:15,407 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-14 04:16:15,407 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure d9b2457233e7ffc30f097d32d5174f00, server=jenkins-hbase4.apache.org,37557,1689308152906 in 179 msec 2023-07-14 04:16:15,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=496db64b68cc336813db711c0434f2af, UNASSIGN in 189 msec 2023-07-14 04:16:15,407 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d9b2457233e7ffc30f097d32d5174f00, UNASSIGN in 190 msec 2023-07-14 04:16:15,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:15,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185. 
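With all five regions now reported CLOSED, the DisableTableProcedure is about to finish and the test moves the disabled table between rsgroups. A minimal hedged check of that state from a client, matching the DISABLED transition the following entries write to hbase:meta; a second disableTable() call at this point would raise TableNotEnabledException, exactly as the log later shows:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TableDisabledCheckSketch {
  public static void main(String[] args) throws Exception {
    TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Reads the table state the master just persisted for this table.
      System.out.println("disabled: " + admin.isTableDisabled(tableName));
    }
  }
}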
2023-07-14 04:16:15,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ce8de8b5c75e3d72ccf6e0f39dbb9185: 2023-07-14 04:16:15,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:15,421 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=ce8de8b5c75e3d72ccf6e0f39dbb9185, regionState=CLOSED 2023-07-14 04:16:15,421 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689308175421"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308175421"}]},"ts":"1689308175421"} 2023-07-14 04:16:15,424 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=144 2023-07-14 04:16:15,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=144, state=SUCCESS; CloseRegionProcedure ce8de8b5c75e3d72ccf6e0f39dbb9185, server=jenkins-hbase4.apache.org,34763,1689308149192 in 199 msec 2023-07-14 04:16:15,426 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-14 04:16:15,426 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=ce8de8b5c75e3d72ccf6e0f39dbb9185, UNASSIGN in 208 msec 2023-07-14 04:16:15,427 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308175427"}]},"ts":"1689308175427"} 2023-07-14 04:16:15,428 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-14 04:16:15,431 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-14 04:16:15,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 233 msec 2023-07-14 04:16:15,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-14 04:16:15,504 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-14 04:16:15,504 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,506 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:15,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:15,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-14 04:16:15,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1862909294, current retry=0 2023-07-14 04:16:15,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1862909294. 2023-07-14 04:16:15,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:15,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-14 04:16:15,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:15,518 INFO [Listener at localhost/46681] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-14 04:16:15,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-14 04:16:15,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:15,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:60972 deadline: 1689308235519, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-14 04:16:15,520 DEBUG [Listener at localhost/46681] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-14 04:16:15,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-14 04:16:15,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,523 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1862909294' 2023-07-14 04:16:15,524 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:15,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:15,531 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:15,531 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:15,531 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:15,531 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-14 04:16:15,532 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:15,535 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/recovered.edits] 2023-07-14 04:16:15,535 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/recovered.edits] 2023-07-14 04:16:15,537 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/recovered.edits] 2023-07-14 04:16:15,537 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/recovered.edits] 2023-07-14 04:16:15,537 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/f, FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/recovered.edits] 2023-07-14 04:16:15,547 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f/recovered.edits/4.seqid 2023-07-14 04:16:15,548 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af/recovered.edits/4.seqid 2023-07-14 04:16:15,551 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185/recovered.edits/4.seqid 2023-07-14 04:16:15,551 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/496db64b68cc336813db711c0434f2af 2023-07-14 04:16:15,551 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/248ac73ed84f325dc91b9d5fe3b3f76f 2023-07-14 04:16:15,551 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00/recovered.edits/4.seqid 2023-07-14 04:16:15,552 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185 2023-07-14 04:16:15,552 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/recovered.edits/4.seqid to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/archive/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0/recovered.edits/4.seqid 2023-07-14 04:16:15,552 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/d9b2457233e7ffc30f097d32d5174f00 2023-07-14 04:16:15,553 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/.tmp/data/default/Group_testDisabledTableMove/9cade64e71b828e853c310f90fe39cc0 2023-07-14 04:16:15,553 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-14 04:16:15,556 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,559 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-14 04:16:15,564 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
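The HFileArchiver lines above show that DeleteTableProcedure does not remove region data outright: each region directory under the table's data location is moved into a parallel "archive" tree that keeps the same namespace/table/region layout. The following is only a minimal sketch of that path mapping using plain Hadoop FileSystem calls with an illustrative root path; the real archiver walks the family and recovered.edits files individually rather than renaming the whole region directory.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchiveRegionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Illustrative root; the run above uses hdfs://localhost:33983/user/jenkins/test-data/...
        Path root = new Path("/hbase");

        // Source: a region directory under the table's data area.
        Path regionDir = new Path(root,
            "data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185");

        // Destination: the same relative layout under the archive area, matching the
        // "Archived from ... to .../archive/data/default/..." lines in the log.
        Path archiveDir = new Path(root,
            "archive/data/default/Group_testDisabledTableMove/ce8de8b5c75e3d72ccf6e0f39dbb9185");

        fs.mkdirs(archiveDir.getParent());                 // ensure the archive parent exists
        boolean moved = fs.rename(regionDir, archiveDir);  // move, not copy: data is preserved
        System.out.println("archived=" + moved);
      }
    }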
2023-07-14 04:16:15,565 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,565 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-14 04:16:15,566 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308175565"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,566 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308175565"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,566 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308175565"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,566 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308175565"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,566 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308175565"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,569 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-14 04:16:15,569 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ce8de8b5c75e3d72ccf6e0f39dbb9185, NAME => 'Group_testDisabledTableMove,,1689308174563.ce8de8b5c75e3d72ccf6e0f39dbb9185.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 496db64b68cc336813db711c0434f2af, NAME => 'Group_testDisabledTableMove,aaaaa,1689308174563.496db64b68cc336813db711c0434f2af.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 248ac73ed84f325dc91b9d5fe3b3f76f, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689308174563.248ac73ed84f325dc91b9d5fe3b3f76f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => d9b2457233e7ffc30f097d32d5174f00, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689308174563.d9b2457233e7ffc30f097d32d5174f00.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9cade64e71b828e853c310f90fe39cc0, NAME => 'Group_testDisabledTableMove,zzzzz,1689308174563.9cade64e71b828e853c310f90fe39cc0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-14 04:16:15,569 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
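Each Delete above targets one row in hbase:meta: the row key is the region name, i.e. "<table>,<start key>,<creation timestamp>.<encoded region id>.", and the table-state row keyed by the bare table name is deleted right after. A hedged sketch of how a client could list those region rows before they are removed, using only the public Table/Scan API (the procedure itself goes through MetaTableAccessor, which is internal); the row prefix is taken from the log, the rest is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListMetaRowsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows for a table share the prefix "<table name>," in hbase:meta.
          Scan scan = new Scan().setRowPrefixFilter(Bytes.toBytes("Group_testDisabledTableMove,"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result r : scanner) {
              // Each row carries info:regioninfo and info:state, matching the Put/Delete
              // JSON printed by RegionStateStore/MetaTableAccessor in the log.
              System.out.println(Bytes.toString(r.getRow()));
            }
          }
        }
      }
    }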
2023-07-14 04:16:15,569 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308175569"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:15,571 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-14 04:16:15,572 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-14 04:16:15,573 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 51 msec 2023-07-14 04:16:15,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-14 04:16:15,633 INFO [Listener at localhost/46681] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-14 04:16:15,638 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,638 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:15,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
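Earlier in this run (callId 923) a second disableTable attempt failed fast with TableNotEnabledException, so the test utility fell back to "already disabled, so just deleting it", and the DeleteTableProcedure above then archived the regions and cleaned hbase:meta. A minimal sketch of that disable-then-delete sequence with the public Admin API, guarding against the already-disabled case in the same way; the table name is taken from the log, everything else is illustrative.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableDeleteSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          try {
            admin.disableTable(tn);          // submits a DisableTableProcedure, as for pid=143 above
          } catch (TableNotEnabledException e) {
            // Table is already disabled (the callId 923 case); safe to proceed to delete.
          }
          admin.deleteTable(tn);             // DeleteTableProcedure: archive regions, clean hbase:meta
        }
      }
    }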
2023-07-14 04:16:15,640 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:15,642 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:15,642 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:15,643 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:15,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:15,653 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:15,654 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:15,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
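The teardown that follows moves the test group's servers back to "default", removes the group, then re-creates a "master" group and tries to add the active master's address to it; that last step is rejected with ConstraintException because only live region servers are group members, and the test merely logs it as "Got this on setup, FYI". Below is a hedged sketch of the same cleanup calls, assuming the RSGroupAdminClient API referenced in the stack traces (moveServers/moveTables/removeRSGroup/addRSGroup) and using host:port values from this run purely as examples.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);

          // Return the group's servers to the default group, then drop the group itself.
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33827)),
              "default");
          groups.removeRSGroup("Group_testDisabledTableMove_1862909294");

          // Re-create the "master" group, then try to add the master's address to it.
          groups.addRSGroup("master");
          try {
            groups.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34797)),
                "master");
          } catch (ConstraintException e) {
            // "Server ...:34797 is either offline or it does not exist." -- the master is
            // not a live region server, so the move is rejected, as in the log above.
          }
        }
      }
    }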
2023-07-14 04:16:15,654 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:15,655 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609] to rsgroup default 2023-07-14 04:16:15,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:15,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1862909294, current retry=0 2023-07-14 04:16:15,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33827,1689308148910, jenkins-hbase4.apache.org,34609,1689308148721] are moved back to Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1862909294 => default 2023-07-14 04:16:15,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:15,660 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1862909294 2023-07-14 04:16:15,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:15,664 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:15,667 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:15,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:15,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:15,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 
04:16:15,674 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:15,676 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:15,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:15,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309375678, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:15,679 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:15,680 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:15,681 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,681 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,681 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:15,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:15,682 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:15,700 INFO [Listener at localhost/46681] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515 (was 511) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6ac6849-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2124810990_17 at /127.0.0.1:39424 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1827064815_17 at /127.0.0.1:42298 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x11ddf8cf-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=798 (was 780) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3924 (was 3939) 2023-07-14 04:16:15,702 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-14 04:16:15,721 INFO [Listener at localhost/46681] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=515, OpenFileDescriptor=798, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3923 2023-07-14 04:16:15,722 WARN [Listener at localhost/46681] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-14 04:16:15,722 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-14 04:16:15,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:15,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:15,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:15,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:15,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:15,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:15,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:15,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:15,739 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:15,742 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:15,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:15,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 
04:16:15,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:15,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:15,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34797] to rsgroup master 2023-07-14 04:16:15,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:15,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:60972 deadline: 1689309375756, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 2023-07-14 04:16:15,756 WARN [Listener at localhost/46681] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34797 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:15,758 INFO [Listener at localhost/46681] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:15,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:15,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:15,759 INFO [Listener at localhost/46681] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33827, jenkins-hbase4.apache.org:34609, jenkins-hbase4.apache.org:34763, jenkins-hbase4.apache.org:37557], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:15,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:15,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34797] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:15,761 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-14 04:16:15,761 INFO [Listener at localhost/46681] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 04:16:15,761 DEBUG [Listener at localhost/46681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02b3de72 to 127.0.0.1:56534 2023-07-14 04:16:15,761 DEBUG [Listener at localhost/46681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,763 DEBUG [Listener at localhost/46681] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 04:16:15,763 DEBUG [Listener at localhost/46681] util.JVMClusterUtil(257): Found active master hash=2000143576, stopped=false 2023-07-14 04:16:15,764 DEBUG [Listener at localhost/46681] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:16:15,764 DEBUG [Listener at localhost/46681] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:16:15,764 INFO [Listener at localhost/46681] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:16:15,765 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:15,765 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:15,766 INFO [Listener at localhost/46681] procedure2.ProcedureExecutor(629): Stopping 2023-07-14 04:16:15,766 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:15,766 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:15,766 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:15,766 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:15,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:15,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:15,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:15,767 DEBUG [Listener at localhost/46681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x685da169 to 127.0.0.1:56534 2023-07-14 04:16:15,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:15,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:15,767 DEBUG [Listener at localhost/46681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34609,1689308148721' ***** 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33827,1689308148910' ***** 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34763,1689308149192' ***** 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:15,767 INFO [Listener at localhost/46681] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37557,1689308152906' ***** 2023-07-14 04:16:15,768 INFO [Listener at localhost/46681] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:15,767 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:15,768 INFO 
[RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:15,767 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:15,767 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:15,783 INFO [RS:0;jenkins-hbase4:34609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@720179eb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:15,783 INFO [RS:3;jenkins-hbase4:37557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@45ea4e7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:15,783 INFO [RS:1;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c96a714{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:15,783 INFO [RS:2;jenkins-hbase4:34763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5965958b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:15,787 INFO [RS:1;jenkins-hbase4:33827] server.AbstractConnector(383): Stopped ServerConnector@74bbd774{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:15,787 INFO [RS:2;jenkins-hbase4:34763] server.AbstractConnector(383): Stopped ServerConnector@5f48ab6c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:15,787 INFO [RS:1;jenkins-hbase4:33827] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:15,787 INFO [RS:2;jenkins-hbase4:34763] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:15,787 INFO [RS:0;jenkins-hbase4:34609] server.AbstractConnector(383): Stopped ServerConnector@4aa89d1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:15,788 INFO [RS:1;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:15,789 INFO [RS:3;jenkins-hbase4:37557] server.AbstractConnector(383): Stopped ServerConnector@53265acb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:15,789 INFO [RS:0;jenkins-hbase4:34609] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:15,789 INFO [RS:2;jenkins-hbase4:34763] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:15,789 INFO [RS:1;jenkins-hbase4:33827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ec386b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:15,791 INFO [RS:2;jenkins-hbase4:34763] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5526bfb1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:15,789 INFO [RS:3;jenkins-hbase4:37557] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:15,790 INFO [RS:0;jenkins-hbase4:34609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:15,791 INFO [RS:3;jenkins-hbase4:37557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b0143bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:15,792 INFO [RS:0;jenkins-hbase4:34609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a6a072{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:15,793 INFO [RS:3;jenkins-hbase4:37557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:15,795 INFO [RS:3;jenkins-hbase4:37557] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:15,795 INFO [RS:0;jenkins-hbase4:34609] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:15,795 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:15,795 INFO [RS:2;jenkins-hbase4:34763] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:15,795 INFO [RS:0;jenkins-hbase4:34609] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:15,796 INFO [RS:2;jenkins-hbase4:34763] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:15,796 INFO [RS:0;jenkins-hbase4:34609] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:15,796 INFO [RS:3;jenkins-hbase4:37557] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:15,796 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:15,795 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:15,796 INFO [RS:3;jenkins-hbase4:37557] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:15,796 DEBUG [RS:0;jenkins-hbase4:34609] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4339e289 to 127.0.0.1:56534 2023-07-14 04:16:15,796 INFO [RS:2;jenkins-hbase4:34763] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-14 04:16:15,796 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:15,796 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(3305): Received CLOSE for 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:15,796 INFO [RS:1;jenkins-hbase4:33827] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:15,796 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(3305): Received CLOSE for 73c3c960f2db2f2a26d94c9444d65972 2023-07-14 04:16:15,797 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:15,797 INFO [RS:1;jenkins-hbase4:33827] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:15,797 INFO [RS:1;jenkins-hbase4:33827] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:15,797 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:15,796 DEBUG [RS:0;jenkins-hbase4:34609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,798 DEBUG [RS:1;jenkins-hbase4:33827] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4d3fc16c to 127.0.0.1:56534 2023-07-14 04:16:15,798 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34609,1689308148721; all regions closed. 2023-07-14 04:16:15,798 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(3305): Received CLOSE for bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:15,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73c3c960f2db2f2a26d94c9444d65972, disabling compactions & flushes 2023-07-14 04:16:15,798 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(3305): Received CLOSE for 75377afadc385c92d6b322193a5c5a3e 2023-07-14 04:16:15,798 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:15,798 DEBUG [RS:3;jenkins-hbase4:37557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x694ffec5 to 127.0.0.1:56534 2023-07-14 04:16:15,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1533d61a3dc01181d37a5f58d846789c, disabling compactions & flushes 2023-07-14 04:16:15,797 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:15,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:15,798 DEBUG [RS:2;jenkins-hbase4:34763] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b89a956 to 127.0.0.1:56534 2023-07-14 04:16:15,798 DEBUG [RS:3;jenkins-hbase4:37557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:15,798 INFO [RS:3;jenkins-hbase4:37557] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:15,799 INFO [RS:3;jenkins-hbase4:37557] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-14 04:16:15,798 DEBUG [RS:1;jenkins-hbase4:33827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,799 INFO [RS:3;jenkins-hbase4:37557] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:15,799 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 04:16:15,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:15,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. after waiting 0 ms 2023-07-14 04:16:15,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:15,798 DEBUG [RS:2;jenkins-hbase4:34763] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:15,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. after waiting 0 ms 2023-07-14 04:16:15,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:15,799 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-14 04:16:15,799 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 04:16:15,799 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:16:15,799 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33827,1689308148910; all regions closed. 
2023-07-14 04:16:15,800 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:16:15,799 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1478): Online Regions={1533d61a3dc01181d37a5f58d846789c=testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c.} 2023-07-14 04:16:15,799 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1478): Online Regions={73c3c960f2db2f2a26d94c9444d65972=hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972., 1588230740=hbase:meta,,1.1588230740, bcb43835975d4f00df4e228eb945f1fb=unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb., 75377afadc385c92d6b322193a5c5a3e=hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e.} 2023-07-14 04:16:15,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:16:15,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:16:15,800 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:16:15,800 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-14 04:16:15,800 DEBUG [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1504): Waiting on 1533d61a3dc01181d37a5f58d846789c 2023-07-14 04:16:15,800 DEBUG [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1504): Waiting on 1588230740, 73c3c960f2db2f2a26d94c9444d65972, 75377afadc385c92d6b322193a5c5a3e, bcb43835975d4f00df4e228eb945f1fb 2023-07-14 04:16:15,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/namespace/73c3c960f2db2f2a26d94c9444d65972/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-14 04:16:15,832 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,832 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,832 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 2023-07-14 04:16:15,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73c3c960f2db2f2a26d94c9444d65972: 2023-07-14 04:16:15,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689308151962.73c3c960f2db2f2a26d94c9444d65972. 
2023-07-14 04:16:15,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bcb43835975d4f00df4e228eb945f1fb, disabling compactions & flushes 2023-07-14 04:16:15,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/testRename/1533d61a3dc01181d37a5f58d846789c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 04:16:15,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:15,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:15,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. after waiting 0 ms 2023-07-14 04:16:15,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:15,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:15,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1533d61a3dc01181d37a5f58d846789c: 2023-07-14 04:16:15,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689308168953.1533d61a3dc01181d37a5f58d846789c. 2023-07-14 04:16:15,842 DEBUG [RS:1;jenkins-hbase4:33827] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:15,842 INFO [RS:1;jenkins-hbase4:33827] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33827%2C1689308148910:(num 1689308151515) 2023-07-14 04:16:15,842 DEBUG [RS:1;jenkins-hbase4:33827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,842 INFO [RS:1;jenkins-hbase4:33827] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,846 INFO [RS:1;jenkins-hbase4:33827] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:15,846 INFO [RS:1;jenkins-hbase4:33827] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:15,846 INFO [RS:1;jenkins-hbase4:33827] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:15,846 INFO [RS:1;jenkins-hbase4:33827] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:15,847 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 04:16:15,847 INFO [RS:1;jenkins-hbase4:33827] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33827 2023-07-14 04:16:15,848 DEBUG [RS:0;jenkins-hbase4:34609] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:15,848 INFO [RS:0;jenkins-hbase4:34609] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34609%2C1689308148721:(num 1689308151515) 2023-07-14 04:16:15,848 DEBUG [RS:0;jenkins-hbase4:34609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:15,848 INFO [RS:0;jenkins-hbase4:34609] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,850 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:15,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/default/unmovedTable/bcb43835975d4f00df4e228eb945f1fb/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-14 04:16:15,853 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:15,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcb43835975d4f00df4e228eb945f1fb: 2023-07-14 04:16:15,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689308170613.bcb43835975d4f00df4e228eb945f1fb. 2023-07-14 04:16:15,853 INFO [RS:0;jenkins-hbase4:34609] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:15,853 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 75377afadc385c92d6b322193a5c5a3e, disabling compactions & flushes 2023-07-14 04:16:15,854 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:15,854 INFO [RS:0;jenkins-hbase4:34609] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:15,855 INFO [RS:0;jenkins-hbase4:34609] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:15,855 INFO [RS:0;jenkins-hbase4:34609] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:15,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:16:15,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:16:15,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. after waiting 0 ms 2023-07-14 04:16:15,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
2023-07-14 04:16:15,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 75377afadc385c92d6b322193a5c5a3e 1/1 column families, dataSize=27.08 KB heapSize=44.60 KB 2023-07-14 04:16:15,856 INFO [RS:0;jenkins-hbase4:34609] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34609 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:15,864 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:15,866 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:15,866 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33827,1689308148910 2023-07-14 04:16:15,867 ERROR [Listener at localhost/46681-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@63cc4063 rejected from java.util.concurrent.ThreadPoolExecutor@67f3c8c2[Terminated, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-14 04:16:15,867 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:15,868 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/info/fddfc53bb7914d9fb1f39a8a6f4e9285 2023-07-14 04:16:15,876 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:15,876 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33827,1689308148910] 2023-07-14 04:16:15,876 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:15,876 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:15,877 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33827,1689308148910; numProcessing=1 2023-07-14 04:16:15,877 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fddfc53bb7914d9fb1f39a8a6f4e9285 2023-07-14 04:16:15,877 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34609,1689308148721 2023-07-14 04:16:15,878 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33827,1689308148910 already deleted, retry=false 2023-07-14 04:16:15,878 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33827,1689308148910 expired; onlineServers=3 2023-07-14 04:16:15,878 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34609,1689308148721] 2023-07-14 04:16:15,878 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34609,1689308148721; numProcessing=2 2023-07-14 04:16:15,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.08 KB at sequenceid=101 (bloomFilter=true), 
to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/.tmp/m/6b7843cb5b904e47978cb971990272cb 2023-07-14 04:16:15,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6b7843cb5b904e47978cb971990272cb 2023-07-14 04:16:15,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/.tmp/m/6b7843cb5b904e47978cb971990272cb as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m/6b7843cb5b904e47978cb971990272cb 2023-07-14 04:16:15,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6b7843cb5b904e47978cb971990272cb 2023-07-14 04:16:15,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/m/6b7843cb5b904e47978cb971990272cb, entries=28, sequenceid=101, filesize=6.1 K 2023-07-14 04:16:15,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.08 KB/27733, heapSize ~44.59 KB/45656, currentSize=0 B/0 for 75377afadc385c92d6b322193a5c5a3e in 63ms, sequenceid=101, compaction requested=false 2023-07-14 04:16:15,925 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/rep_barrier/7309f71ccece4f34943a092de0074f8f 2023-07-14 04:16:15,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/rsgroup/75377afadc385c92d6b322193a5c5a3e/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-14 04:16:15,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:15,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 2023-07-14 04:16:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 75377afadc385c92d6b322193a5c5a3e: 2023-07-14 04:16:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689308151998.75377afadc385c92d6b322193a5c5a3e. 
2023-07-14 04:16:15,931 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7309f71ccece4f34943a092de0074f8f 2023-07-14 04:16:15,946 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/table/a57815842478495fb8592a5ea9b7fa8a 2023-07-14 04:16:15,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a57815842478495fb8592a5ea9b7fa8a 2023-07-14 04:16:15,953 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/info/fddfc53bb7914d9fb1f39a8a6f4e9285 as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info/fddfc53bb7914d9fb1f39a8a6f4e9285 2023-07-14 04:16:15,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fddfc53bb7914d9fb1f39a8a6f4e9285 2023-07-14 04:16:15,961 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/info/fddfc53bb7914d9fb1f39a8a6f4e9285, entries=62, sequenceid=210, filesize=11.9 K 2023-07-14 04:16:15,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/rep_barrier/7309f71ccece4f34943a092de0074f8f as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier/7309f71ccece4f34943a092de0074f8f 2023-07-14 04:16:15,968 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7309f71ccece4f34943a092de0074f8f 2023-07-14 04:16:15,968 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/rep_barrier/7309f71ccece4f34943a092de0074f8f, entries=8, sequenceid=210, filesize=5.8 K 2023-07-14 04:16:15,969 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/.tmp/table/a57815842478495fb8592a5ea9b7fa8a as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table/a57815842478495fb8592a5ea9b7fa8a 2023-07-14 04:16:15,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a57815842478495fb8592a5ea9b7fa8a 2023-07-14 04:16:15,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/table/a57815842478495fb8592a5ea9b7fa8a, entries=16, sequenceid=210, filesize=6.0 K 2023-07-14 04:16:15,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 176ms, sequenceid=210, compaction requested=false 2023-07-14 04:16:15,977 INFO [RS:1;jenkins-hbase4:33827] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33827,1689308148910; zookeeper connection closed. 2023-07-14 04:16:15,977 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:15,977 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:33827-0x101620b2b570002, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:15,978 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@643d1387] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@643d1387 2023-07-14 04:16:15,982 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34609,1689308148721 already deleted, retry=false 2023-07-14 04:16:15,982 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34609,1689308148721 expired; onlineServers=2 2023-07-14 04:16:15,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-14 04:16:15,993 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:15,994 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:15,994 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:16:15,994 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:16,001 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34763,1689308149192; all regions closed. 2023-07-14 04:16:16,001 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37557,1689308152906; all regions closed. 
2023-07-14 04:16:16,009 DEBUG [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:16,009 DEBUG [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:16,009 INFO [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34763%2C1689308149192.meta:.meta(num 1689308151718) 2023-07-14 04:16:16,009 INFO [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37557%2C1689308152906.meta:.meta(num 1689308160028) 2023-07-14 04:16:16,018 DEBUG [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:16,018 INFO [RS:3;jenkins-hbase4:37557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37557%2C1689308152906:(num 1689308153356) 2023-07-14 04:16:16,018 DEBUG [RS:3;jenkins-hbase4:37557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:16,018 INFO [RS:3;jenkins-hbase4:37557] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:16,018 INFO [RS:3;jenkins-hbase4:37557] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:16,019 INFO [RS:3;jenkins-hbase4:37557] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37557 2023-07-14 04:16:16,019 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:16,020 DEBUG [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/oldWALs 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34763%2C1689308149192:(num 1689308151515) 2023-07-14 04:16:16,020 DEBUG [RS:2;jenkins-hbase4:34763] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:16,020 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:16,020 INFO [RS:2;jenkins-hbase4:34763] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-14 04:16:16,021 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:16,021 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:16,021 INFO [RS:2;jenkins-hbase4:34763] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34763 2023-07-14 04:16:16,021 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37557,1689308152906 2023-07-14 04:16:16,024 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37557,1689308152906] 2023-07-14 04:16:16,024 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37557,1689308152906; numProcessing=3 2023-07-14 04:16:16,025 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34763,1689308149192 2023-07-14 04:16:16,025 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:16,124 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,124 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:37557-0x101620b2b57000b, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,124 INFO [RS:3;jenkins-hbase4:37557] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37557,1689308152906; zookeeper connection closed. 
2023-07-14 04:16:16,125 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2832d2a3 2023-07-14 04:16:16,127 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37557,1689308152906 already deleted, retry=false 2023-07-14 04:16:16,127 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37557,1689308152906 expired; onlineServers=1 2023-07-14 04:16:16,128 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34763,1689308149192] 2023-07-14 04:16:16,128 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34763,1689308149192; numProcessing=4 2023-07-14 04:16:16,129 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34763,1689308149192 already deleted, retry=false 2023-07-14 04:16:16,129 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34763,1689308149192 expired; onlineServers=0 2023-07-14 04:16:16,129 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34797,1689308146653' ***** 2023-07-14 04:16:16,129 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 04:16:16,130 DEBUG [M:0;jenkins-hbase4:34797] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@360acee3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:16,131 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:16,133 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:16,133 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:16,134 INFO [M:0;jenkins-hbase4:34797] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38aa31da{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:16:16,134 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:16,138 INFO [M:0;jenkins-hbase4:34797] server.AbstractConnector(383): Stopped ServerConnector@7e43481b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:16,138 INFO [M:0;jenkins-hbase4:34797] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:16,139 INFO [M:0;jenkins-hbase4:34797] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@680fffdc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:16,140 INFO [M:0;jenkins-hbase4:34797] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38bddd36{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:16,140 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34797,1689308146653 2023-07-14 04:16:16,140 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34797,1689308146653; all regions closed. 2023-07-14 04:16:16,140 DEBUG [M:0;jenkins-hbase4:34797] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:16,140 INFO [M:0;jenkins-hbase4:34797] master.HMaster(1491): Stopping master jetty server 2023-07-14 04:16:16,141 INFO [M:0;jenkins-hbase4:34797] server.AbstractConnector(383): Stopped ServerConnector@5812bd08{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:16,142 DEBUG [M:0;jenkins-hbase4:34797] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 04:16:16,142 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-14 04:16:16,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308150954] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308150954,5,FailOnTimeoutGroup] 2023-07-14 04:16:16,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308150966] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308150966,5,FailOnTimeoutGroup] 2023-07-14 04:16:16,142 DEBUG [M:0;jenkins-hbase4:34797] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 04:16:16,142 INFO [M:0;jenkins-hbase4:34797] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 04:16:16,142 INFO [M:0;jenkins-hbase4:34797] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-14 04:16:16,142 INFO [M:0;jenkins-hbase4:34797] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-14 04:16:16,142 DEBUG [M:0;jenkins-hbase4:34797] master.HMaster(1512): Stopping service threads 2023-07-14 04:16:16,143 INFO [M:0;jenkins-hbase4:34797] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 04:16:16,143 ERROR [M:0;jenkins-hbase4:34797] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-14 04:16:16,144 INFO [M:0;jenkins-hbase4:34797] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 04:16:16,144 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-14 04:16:16,144 DEBUG [M:0;jenkins-hbase4:34797] zookeeper.ZKUtil(398): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-14 04:16:16,144 WARN [M:0;jenkins-hbase4:34797] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-14 04:16:16,144 INFO [M:0;jenkins-hbase4:34797] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 04:16:16,144 INFO [M:0;jenkins-hbase4:34797] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 04:16:16,144 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:16:16,144 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:16,144 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:16,144 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:16:16,144 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-14 04:16:16,145 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.04 KB heapSize=621.13 KB 2023-07-14 04:16:16,161 INFO [M:0;jenkins-hbase4:34797] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.04 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4ca2cd964e10449f8d0345ddd86a4486 2023-07-14 04:16:16,167 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/4ca2cd964e10449f8d0345ddd86a4486 as hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4ca2cd964e10449f8d0345ddd86a4486 2023-07-14 04:16:16,177 INFO [M:0;jenkins-hbase4:34797] regionserver.HStore(1080): Added hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/4ca2cd964e10449f8d0345ddd86a4486, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-14 04:16:16,178 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegion(2948): Finished flush of dataSize ~519.04 KB/531501, heapSize ~621.12 KB/636024, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=1152, compaction requested=false 2023-07-14 04:16:16,180 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:16,180 DEBUG [M:0;jenkins-hbase4:34797] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:16,193 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:16,193 INFO [M:0;jenkins-hbase4:34797] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 04:16:16,194 INFO [M:0;jenkins-hbase4:34797] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34797 2023-07-14 04:16:16,196 DEBUG [M:0;jenkins-hbase4:34797] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34797,1689308146653 already deleted, retry=false 2023-07-14 04:16:16,468 INFO [M:0;jenkins-hbase4:34797] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34797,1689308146653; zookeeper connection closed. 2023-07-14 04:16:16,468 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,468 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): master:34797-0x101620b2b570000, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,568 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,568 INFO [RS:2;jenkins-hbase4:34763] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34763,1689308149192; zookeeper connection closed. 
2023-07-14 04:16:16,568 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34763-0x101620b2b570003, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,570 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6f9a2e19] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6f9a2e19 2023-07-14 04:16:16,668 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,668 INFO [RS:0;jenkins-hbase4:34609] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34609,1689308148721; zookeeper connection closed. 2023-07-14 04:16:16,668 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): regionserver:34609-0x101620b2b570001, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:16,668 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@652161ec] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@652161ec 2023-07-14 04:16:16,669 INFO [Listener at localhost/46681] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-14 04:16:16,669 WARN [Listener at localhost/46681] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:16,672 INFO [Listener at localhost/46681] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:16,776 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 04:16:16,776 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-112108073-172.31.14.131-1689308143026 (Datanode Uuid 916a5ea7-e13f-42c6-b6ee-79eaac5e19ab) service to localhost/127.0.0.1:33983 2023-07-14 04:16:16,778 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data5/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,778 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data6/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,779 WARN [Listener at localhost/46681] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:16,782 INFO [Listener at localhost/46681] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:16,785 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-14 04:16:16,785 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-112108073-172.31.14.131-1689308143026 (Datanode Uuid 0adb0faa-8114-41ff-822f-9596167e2e37) service to localhost/127.0.0.1:33983 2023-07-14 04:16:16,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data3/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data4/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,788 WARN [Listener at localhost/46681] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:16,795 INFO [Listener at localhost/46681] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:16,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:16,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:16:16,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:16:16,897 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 04:16:16,898 WARN [BP-112108073-172.31.14.131-1689308143026 heartbeating to localhost/127.0.0.1:33983] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-112108073-172.31.14.131-1689308143026 (Datanode Uuid c5b2c7e4-deea-42ee-99fd-89a518b7f806) service to localhost/127.0.0.1:33983 2023-07-14 04:16:16,898 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data1/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,899 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/cluster_badfe9da-6d51-be67-1850-63cbc5aca07e/dfs/data/data2/current/BP-112108073-172.31.14.131-1689308143026] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:16,929 INFO [Listener at localhost/46681] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:17,050 INFO [Listener at localhost/46681] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-14 04:16:17,100 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-14 04:16:17,100 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-14 04:16:17,100 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.log.dir so I do NOT create it in target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51 2023-07-14 04:16:17,100 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/aa4c0b08-c9c3-949f-54d7-61a7c084b86b/hadoop.tmp.dir so I do NOT create it in target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad, deleteOnExit=true 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/test.cache.data in system properties and HBase conf 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.tmp.dir in system properties and HBase conf 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir in system properties and HBase conf 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-14 04:16:17,101 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-14 04:16:17,101 DEBUG [Listener at localhost/46681] 
fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:16:17,102 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/nfs.dump.dir in system properties and HBase conf 
2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir in system properties and HBase conf 2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 04:16:17,103 INFO [Listener at localhost/46681] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 04:16:17,107 WARN [Listener at localhost/46681] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:16:17,108 WARN [Listener at localhost/46681] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:16:17,146 DEBUG [Listener at localhost/46681-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101620b2b57000a, quorum=127.0.0.1:56534, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-14 04:16:17,146 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101620b2b57000a, quorum=127.0.0.1:56534, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-14 04:16:17,156 WARN [Listener at localhost/46681] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:17,160 INFO [Listener at localhost/46681] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:17,167 INFO [Listener at localhost/46681] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/Jetty_localhost_39649_hdfs____ygu56n/webapp 2023-07-14 04:16:17,277 INFO [Listener at localhost/46681] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39649 2023-07-14 04:16:17,282 WARN [Listener at localhost/46681] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:16:17,282 WARN [Listener at localhost/46681] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:16:17,337 WARN [Listener at localhost/42129] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:17,351 WARN [Listener at localhost/42129] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) 
assuming MILLISECONDS 2023-07-14 04:16:17,354 WARN [Listener at localhost/42129] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:17,355 INFO [Listener at localhost/42129] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:17,359 INFO [Listener at localhost/42129] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/Jetty_localhost_39199_datanode____xvfpea/webapp 2023-07-14 04:16:17,454 INFO [Listener at localhost/42129] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39199 2023-07-14 04:16:17,462 WARN [Listener at localhost/44329] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:17,483 WARN [Listener at localhost/44329] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:16:17,485 WARN [Listener at localhost/44329] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:17,486 INFO [Listener at localhost/44329] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:17,492 INFO [Listener at localhost/44329] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/Jetty_localhost_44613_datanode____z0df0f/webapp 2023-07-14 04:16:17,580 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcbe5f5df21578981: Processing first storage report for DS-83bba43d-8687-408b-a33f-c90cad5510ad from datanode c2eda0a8-11fc-4254-b0ac-0616ebfec0f3 2023-07-14 04:16:17,581 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcbe5f5df21578981: from storage DS-83bba43d-8687-408b-a33f-c90cad5510ad node DatanodeRegistration(127.0.0.1:36923, datanodeUuid=c2eda0a8-11fc-4254-b0ac-0616ebfec0f3, infoPort=46465, infoSecurePort=0, ipcPort=44329, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:17,581 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcbe5f5df21578981: Processing first storage report for DS-8e21130c-4921-410d-872f-b58b00a549c4 from datanode c2eda0a8-11fc-4254-b0ac-0616ebfec0f3 2023-07-14 04:16:17,581 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcbe5f5df21578981: from storage DS-8e21130c-4921-410d-872f-b58b00a549c4 node DatanodeRegistration(127.0.0.1:36923, datanodeUuid=c2eda0a8-11fc-4254-b0ac-0616ebfec0f3, infoPort=46465, infoSecurePort=0, ipcPort=44329, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:17,625 INFO [Listener at localhost/44329] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44613 2023-07-14 04:16:17,634 WARN [Listener at localhost/46367] 
common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:17,662 WARN [Listener at localhost/46367] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:16:17,665 WARN [Listener at localhost/46367] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:17,667 INFO [Listener at localhost/46367] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:17,672 INFO [Listener at localhost/46367] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/Jetty_localhost_37633_datanode____.tf80kn/webapp 2023-07-14 04:16:17,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x578afacb68ce1b9a: Processing first storage report for DS-624ad994-9968-475e-ab77-2a06a85fe100 from datanode a6b3f32e-14e2-4bce-8741-408e3536106a 2023-07-14 04:16:17,756 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x578afacb68ce1b9a: from storage DS-624ad994-9968-475e-ab77-2a06a85fe100 node DatanodeRegistration(127.0.0.1:45451, datanodeUuid=a6b3f32e-14e2-4bce-8741-408e3536106a, infoPort=35807, infoSecurePort=0, ipcPort=46367, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:17,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x578afacb68ce1b9a: Processing first storage report for DS-3b2531b8-c218-4e95-a3a9-99912f72b6d2 from datanode a6b3f32e-14e2-4bce-8741-408e3536106a 2023-07-14 04:16:17,757 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x578afacb68ce1b9a: from storage DS-3b2531b8-c218-4e95-a3a9-99912f72b6d2 node DatanodeRegistration(127.0.0.1:45451, datanodeUuid=a6b3f32e-14e2-4bce-8741-408e3536106a, infoPort=35807, infoSecurePort=0, ipcPort=46367, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:17,781 INFO [Listener at localhost/46367] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37633 2023-07-14 04:16:17,788 WARN [Listener at localhost/34751] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:17,903 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc2239a76d275e2dc: Processing first storage report for DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2 from datanode 552ffd18-8729-4e3f-89da-2e4a5bf1e3f2 2023-07-14 04:16:17,903 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2239a76d275e2dc: from storage DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2 node DatanodeRegistration(127.0.0.1:45293, datanodeUuid=552ffd18-8729-4e3f-89da-2e4a5bf1e3f2, infoPort=42629, infoSecurePort=0, ipcPort=34751, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:17,903 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* 
processReport 0xc2239a76d275e2dc: Processing first storage report for DS-578d5875-06f0-4ef5-a089-90fe830d4a45 from datanode 552ffd18-8729-4e3f-89da-2e4a5bf1e3f2 2023-07-14 04:16:17,903 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc2239a76d275e2dc: from storage DS-578d5875-06f0-4ef5-a089-90fe830d4a45 node DatanodeRegistration(127.0.0.1:45293, datanodeUuid=552ffd18-8729-4e3f-89da-2e4a5bf1e3f2, infoPort=42629, infoSecurePort=0, ipcPort=34751, storageInfo=lv=-57;cid=testClusterID;nsid=28809319;c=1689308177110), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:18,001 DEBUG [Listener at localhost/34751] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51 2023-07-14 04:16:18,003 INFO [Listener at localhost/34751] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/zookeeper_0, clientPort=62077, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 04:16:18,005 INFO [Listener at localhost/34751] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62077 2023-07-14 04:16:18,005 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,006 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,025 INFO [Listener at localhost/34751] util.FSUtils(471): Created version file at hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b with version=8 2023-07-14 04:16:18,026 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/hbase-staging 2023-07-14 04:16:18,027 DEBUG [Listener at localhost/34751] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 04:16:18,027 DEBUG [Listener at localhost/34751] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 04:16:18,027 DEBUG [Listener at localhost/34751] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 04:16:18,027 DEBUG [Listener at localhost/34751] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:18,028 INFO [Listener at localhost/34751] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:18,029 INFO [Listener at localhost/34751] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36435 2023-07-14 04:16:18,030 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,031 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,032 INFO [Listener at localhost/34751] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36435 connecting to ZooKeeper ensemble=127.0.0.1:62077 2023-07-14 04:16:18,039 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:364350x0, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:18,040 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36435-0x101620ba9560000 connected 2023-07-14 04:16:18,054 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:18,054 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:18,055 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:18,056 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36435 2023-07-14 04:16:18,056 DEBUG [Listener at localhost/34751] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36435 2023-07-14 04:16:18,059 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36435 2023-07-14 04:16:18,062 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36435 2023-07-14 04:16:18,062 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36435 2023-07-14 04:16:18,064 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:18,064 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:18,065 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:18,065 INFO [Listener at localhost/34751] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 04:16:18,065 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:18,065 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:18,065 INFO [Listener at localhost/34751] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 04:16:18,066 INFO [Listener at localhost/34751] http.HttpServer(1146): Jetty bound to port 42001 2023-07-14 04:16:18,066 INFO [Listener at localhost/34751] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:18,071 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,072 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b153e95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:18,072 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,072 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11bfa027{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:18,191 INFO [Listener at localhost/34751] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:18,193 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:18,193 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:18,193 INFO [Listener at localhost/34751] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:16:18,194 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,195 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@33228b98{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/jetty-0_0_0_0-42001-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7063987493606148470/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:16:18,197 INFO [Listener at localhost/34751] server.AbstractConnector(333): Started ServerConnector@26c901e{HTTP/1.1, (http/1.1)}{0.0.0.0:42001} 2023-07-14 04:16:18,197 INFO [Listener at localhost/34751] server.Server(415): Started @37074ms 2023-07-14 04:16:18,197 INFO [Listener at localhost/34751] master.HMaster(444): hbase.rootdir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b, hbase.cluster.distributed=false 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,212 INFO 
[Listener at localhost/34751] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:18,212 INFO [Listener at localhost/34751] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:18,213 INFO [Listener at localhost/34751] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35847 2023-07-14 04:16:18,214 INFO [Listener at localhost/34751] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:18,218 DEBUG [Listener at localhost/34751] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:18,219 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,221 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,222 INFO [Listener at localhost/34751] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35847 connecting to ZooKeeper ensemble=127.0.0.1:62077 2023-07-14 04:16:18,227 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:358470x0, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:18,228 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:358470x0, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:18,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35847-0x101620ba9560001 connected 2023-07-14 04:16:18,229 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:18,229 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:18,230 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35847 2023-07-14 04:16:18,230 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35847 2023-07-14 04:16:18,231 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35847 2023-07-14 04:16:18,231 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35847 2023-07-14 04:16:18,231 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35847 2023-07-14 04:16:18,233 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:18,234 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:18,234 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:18,234 INFO [Listener at localhost/34751] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:18,234 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:18,234 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:18,235 INFO [Listener at localhost/34751] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:16:18,236 INFO [Listener at localhost/34751] http.HttpServer(1146): Jetty bound to port 37341 2023-07-14 04:16:18,236 INFO [Listener at localhost/34751] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:18,237 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,237 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72e39cc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:18,238 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,238 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29927ba0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:18,269 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-14 04:16:18,363 INFO [Listener at localhost/34751] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:18,363 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:18,364 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:18,364 INFO [Listener at localhost/34751] session.HouseKeeper(132): node0 
Scavenging every 660000ms 2023-07-14 04:16:18,365 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,365 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6f3181ff{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/jetty-0_0_0_0-37341-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1335913823040914517/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:18,367 INFO [Listener at localhost/34751] server.AbstractConnector(333): Started ServerConnector@6acb7487{HTTP/1.1, (http/1.1)}{0.0.0.0:37341} 2023-07-14 04:16:18,367 INFO [Listener at localhost/34751] server.Server(415): Started @37244ms 2023-07-14 04:16:18,379 INFO [Listener at localhost/34751] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:18,379 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,379 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,380 INFO [Listener at localhost/34751] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:18,380 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,380 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:18,380 INFO [Listener at localhost/34751] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:18,381 INFO [Listener at localhost/34751] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40445 2023-07-14 04:16:18,381 INFO [Listener at localhost/34751] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:18,382 DEBUG [Listener at localhost/34751] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:18,383 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,384 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,385 INFO [Listener at localhost/34751] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40445 connecting to ZooKeeper ensemble=127.0.0.1:62077 2023-07-14 04:16:18,389 DEBUG [Listener at localhost/34751-EventThread] 
zookeeper.ZKWatcher(600): regionserver:404450x0, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:18,391 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40445-0x101620ba9560002 connected 2023-07-14 04:16:18,391 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:18,392 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:18,392 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:18,396 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40445 2023-07-14 04:16:18,396 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40445 2023-07-14 04:16:18,399 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40445 2023-07-14 04:16:18,401 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40445 2023-07-14 04:16:18,402 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40445 2023-07-14 04:16:18,404 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:18,404 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:18,404 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:18,405 INFO [Listener at localhost/34751] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:18,405 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:18,405 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:18,405 INFO [Listener at localhost/34751] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 04:16:18,406 INFO [Listener at localhost/34751] http.HttpServer(1146): Jetty bound to port 34567 2023-07-14 04:16:18,406 INFO [Listener at localhost/34751] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:18,408 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,408 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1888b8ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:18,408 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,408 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4effafd7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:18,523 INFO [Listener at localhost/34751] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:18,524 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:18,524 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:18,524 INFO [Listener at localhost/34751] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:16:18,525 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,526 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@84e05f2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/jetty-0_0_0_0-34567-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6542327548381761505/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:18,527 INFO [Listener at localhost/34751] server.AbstractConnector(333): Started ServerConnector@1da00a90{HTTP/1.1, (http/1.1)}{0.0.0.0:34567} 2023-07-14 04:16:18,527 INFO [Listener at localhost/34751] server.Server(415): Started @37404ms 2023-07-14 04:16:18,539 INFO [Listener at localhost/34751] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:18,539 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,540 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,540 INFO [Listener at localhost/34751] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:18,540 INFO 
[Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:18,540 INFO [Listener at localhost/34751] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:18,540 INFO [Listener at localhost/34751] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:18,541 INFO [Listener at localhost/34751] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34775 2023-07-14 04:16:18,541 INFO [Listener at localhost/34751] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:18,543 DEBUG [Listener at localhost/34751] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:18,543 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,545 INFO [Listener at localhost/34751] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,546 INFO [Listener at localhost/34751] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34775 connecting to ZooKeeper ensemble=127.0.0.1:62077 2023-07-14 04:16:18,549 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:347750x0, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:18,551 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:347750x0, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:18,551 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34775-0x101620ba9560003 connected 2023-07-14 04:16:18,552 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:18,553 DEBUG [Listener at localhost/34751] zookeeper.ZKUtil(164): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:18,553 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34775 2023-07-14 04:16:18,553 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34775 2023-07-14 04:16:18,556 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34775 2023-07-14 04:16:18,556 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34775 2023-07-14 04:16:18,557 DEBUG [Listener at localhost/34751] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=34775 2023-07-14 04:16:18,559 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:18,559 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:18,559 INFO [Listener at localhost/34751] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:18,559 INFO [Listener at localhost/34751] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:18,559 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:18,560 INFO [Listener at localhost/34751] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:18,560 INFO [Listener at localhost/34751] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:16:18,560 INFO [Listener at localhost/34751] http.HttpServer(1146): Jetty bound to port 45927 2023-07-14 04:16:18,560 INFO [Listener at localhost/34751] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:18,562 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,562 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@27875105{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:18,562 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,562 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@173770e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:18,694 INFO [Listener at localhost/34751] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:18,695 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:18,696 INFO [Listener at localhost/34751] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:18,696 INFO [Listener at localhost/34751] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:16:18,701 INFO [Listener at localhost/34751] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:18,702 INFO [Listener at localhost/34751] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@169a71c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/java.io.tmpdir/jetty-0_0_0_0-45927-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4804125793026002/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:18,703 INFO [Listener at localhost/34751] server.AbstractConnector(333): Started ServerConnector@7257b488{HTTP/1.1, (http/1.1)}{0.0.0.0:45927} 2023-07-14 04:16:18,703 INFO [Listener at localhost/34751] server.Server(415): Started @37581ms 2023-07-14 04:16:18,706 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:18,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5a675ffb{HTTP/1.1, (http/1.1)}{0.0.0.0:34401} 2023-07-14 04:16:18,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37586ms 2023-07-14 04:16:18,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,711 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:16:18,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,714 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:18,714 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,714 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:18,714 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:18,714 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:18,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:16:18,717 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:16:18,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36435,1689308178027 from backup master directory 2023-07-14 04:16:18,719 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,719 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:16:18,719 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:16:18,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/hbase.id with ID: 742837a8-dfb3-43e5-8e05-217b389fa82f 2023-07-14 04:16:18,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:18,756 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x193e0eeb to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:18,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c1389e9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:18,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:18,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 04:16:18,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:18,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store-tmp 2023-07-14 04:16:18,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:18,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:16:18,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:18,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:18,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:16:18,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:18,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
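The 'proc' family and 'master:store' descriptor printed above can be rebuilt with the public HBase 2.x client builders. A minimal sketch, assuming the standard hbase-client API (TableDescriptorBuilder, ColumnFamilyDescriptorBuilder); only the names and values echoed in the log are used, nothing is read from the cluster:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Column family 'proc' as logged: ROW bloom filter, 1 version, 64 KB blocks,
    // block cache on, not in-memory; compression and encoding left at their defaults (NONE).
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)
        .setMaxVersions(1)
        .setBlocksize(64 * 1024)
        .setBlockCacheEnabled(true)
        .setInMemory(false)
        .build();

    // Table 'master:store' (namespace 'master', qualifier 'store') with that single family.
    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    System.out.println(store);
  }
}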
2023-07-14 04:16:18,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:18,784 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/WALs/jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,788 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36435%2C1689308178027, suffix=, logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/WALs/jenkins-hbase4.apache.org,36435,1689308178027, archiveDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/oldWALs, maxLogs=10 2023-07-14 04:16:18,814 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK] 2023-07-14 04:16:18,814 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK] 2023-07-14 04:16:18,814 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK] 2023-07-14 04:16:18,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/WALs/jenkins-hbase4.apache.org,36435,1689308178027/jenkins-hbase4.apache.org%2C36435%2C1689308178027.1689308178789 2023-07-14 04:16:18,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK], DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK], DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK]] 2023-07-14 04:16:18,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:18,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:18,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,821 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,823 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,824 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 04:16:18,825 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 04:16:18,826 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:18,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,830 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:18,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:18,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11208531040, jitterRate=0.0438757985830307}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:18,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:18,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 04:16:18,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 04:16:18,836 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 04:16:18,836 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 04:16:18,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-14 04:16:18,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-14 04:16:18,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 04:16:18,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 04:16:18,840 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-14 04:16:18,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 04:16:18,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 04:16:18,841 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 04:16:18,844 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 04:16:18,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 04:16:18,846 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 04:16:18,847 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:18,847 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:18,847 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-14 04:16:18,847 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:18,848 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36435,1689308178027, sessionid=0x101620ba9560000, setting cluster-up flag (Was=false) 2023-07-14 04:16:18,854 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 04:16:18,861 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,866 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:18,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 04:16:18,872 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:18,873 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.hbase-snapshot/.tmp 2023-07-14 04:16:18,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 04:16:18,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 04:16:18,878 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 04:16:18,879 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:16:18,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
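The RSGroupAdminEndpoint coprocessor shown loading here is normally enabled through master configuration before startup. A hedged sketch of the usual two keys for the 2.4 rsgroup module; the class names are the stock ones, and whether this test wires them exactly this way is an assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Register the rsgroup admin endpoint as a system coprocessor on the master.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");

    // Use the rsgroup-aware balancer so group membership is honoured when placing regions.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");

    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}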
2023-07-14 04:16:18,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-14 04:16:18,880 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:18,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:16:18,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 04:16:18,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:16:18,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
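The StochasticLoadBalancer settings echoed above (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, runMaxSteps=false) correspond to configuration keys on the balancer. A sketch of how they could be tuned, assuming the stock key names and using the default values from the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Upper bound on candidate moves the balancer explores per run.
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    // Per-region multiplier used to size a run's step budget.
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    // Wall-clock budget for a single balancer run, in milliseconds.
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    // If true, always exhaust the step budget regardless of the time budget.
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);

    System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
  }
}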
2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:18,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:18,906 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689308208906 2023-07-14 04:16:18,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 04:16:18,908 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:18,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 04:16:18,908 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 04:16:18,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 04:16:18,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 04:16:18,908 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(951): ClusterId : 742837a8-dfb3-43e5-8e05-217b389fa82f 2023-07-14 04:16:18,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 04:16:18,908 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(951): ClusterId : 742837a8-dfb3-43e5-8e05-217b389fa82f 2023-07-14 04:16:18,910 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc 
initializing 2023-07-14 04:16:18,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 04:16:18,910 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(951): ClusterId : 742837a8-dfb3-43e5-8e05-217b389fa82f 2023-07-14 04:16:18,912 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:18,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:18,912 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:18,912 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:18,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 04:16:18,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 04:16:18,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 04:16:18,914 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:18,914 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:18,914 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:18,914 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:18,917 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:18,917 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:18,917 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:18,918 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:18,919 
DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:18,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 04:16:18,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 04:16:18,924 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ReadOnlyZKClient(139): Connect 0x42eb7e95 to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:18,924 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ReadOnlyZKClient(139): Connect 0x1cfa9bf6 to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:18,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308178923,5,FailOnTimeoutGroup] 2023-07-14 04:16:18,927 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ReadOnlyZKClient(139): Connect 0x3f7751bf to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:18,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308178927,5,FailOnTimeoutGroup] 2023-07-14 04:16:18,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:18,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 04:16:18,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:18,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
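The LogsCleaner, HFileCleaner and snapshot chores scheduled above all follow the same ChoreService/ScheduledChore pattern. A minimal, self-contained sketch of that pattern; the chore name, period and body here are illustrative, not HBase's actual cleaners:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class CleanerChoreSketch {
  // Trivial Stoppable so the chore has a lifecycle owner.
  static final class SimpleStopper implements Stoppable {
    private volatile boolean stopped;
    @Override public void stop(String why) { stopped = true; }
    @Override public boolean isStopped() { return stopped; }
  }

  public static void main(String[] args) throws InterruptedException {
    SimpleStopper stopper = new SimpleStopper();
    ChoreService service = new ChoreService("demo");

    // A periodic task in the same style as the cleaners above: name, owner, period in ms.
    ScheduledChore cleaner = new ScheduledChore("DemoCleaner", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("cleaner pass at " + System.currentTimeMillis());
      }
    };

    service.scheduleChore(cleaner);
    Thread.sleep(3_500);            // let it run a few passes
    stopper.stop("demo finished");
    service.shutdown();
  }
}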
2023-07-14 04:16:18,943 DEBUG [RS:1;jenkins-hbase4:40445] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@123d7619, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:18,943 DEBUG [RS:2;jenkins-hbase4:34775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31b0b821, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:18,943 DEBUG [RS:0;jenkins-hbase4:35847] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@481315e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:18,943 DEBUG [RS:2;jenkins-hbase4:34775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@358f3acf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:18,943 DEBUG [RS:0;jenkins-hbase4:35847] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d118283, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:18,944 DEBUG [RS:1;jenkins-hbase4:40445] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53007c22, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:18,954 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40445 2023-07-14 04:16:18,954 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35847 2023-07-14 04:16:18,954 INFO [RS:1;jenkins-hbase4:40445] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:18,955 INFO [RS:0;jenkins-hbase4:35847] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:18,955 INFO [RS:0;jenkins-hbase4:35847] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:18,955 INFO [RS:1;jenkins-hbase4:40445] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:18,955 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34775 2023-07-14 04:16:18,955 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:16:18,955 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1022): About to register with Master. 
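The "Installed shutdown hook thread" lines come down to a plain JVM shutdown hook that triggers an orderly region server stop. A generic sketch of the mechanism using only the JDK; the cleanup body is a placeholder, not HBase's real hook:

public class ShutdownHookSketch {
  public static void main(String[] args) throws InterruptedException {
    // Register a hook that runs when the JVM receives SIGTERM/SIGINT or exits normally.
    Thread hook = new Thread(() -> {
      // Placeholder for orderly shutdown work (flush, close WALs, deregister from ZK, ...).
      System.out.println("shutdown hook running: releasing resources");
    }, "Shutdownhook:demo");

    Runtime.getRuntime().addShutdownHook(hook);

    System.out.println("running; press Ctrl-C or wait for exit");
    Thread.sleep(1_000);
  }
}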
2023-07-14 04:16:18,955 INFO [RS:2;jenkins-hbase4:34775] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:18,955 INFO [RS:2;jenkins-hbase4:34775] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:18,955 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:16:18,956 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36435,1689308178027 with isa=jenkins-hbase4.apache.org/172.31.14.131:35847, startcode=1689308178211 2023-07-14 04:16:18,956 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36435,1689308178027 with isa=jenkins-hbase4.apache.org/172.31.14.131:40445, startcode=1689308178379 2023-07-14 04:16:18,956 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36435,1689308178027 with isa=jenkins-hbase4.apache.org/172.31.14.131:34775, startcode=1689308178539 2023-07-14 04:16:18,956 DEBUG [RS:0;jenkins-hbase4:35847] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:18,956 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:18,956 DEBUG [RS:2;jenkins-hbase4:34775] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:18,956 DEBUG [RS:1;jenkins-hbase4:40445] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:18,957 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:18,957 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b 2023-07-14 04:16:18,958 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34433, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 
04:16:18,958 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50539, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:18,958 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54443, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:18,961 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36435] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,961 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:16:18,961 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36435] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,961 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b 2023-07-14 04:16:18,961 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36435] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,961 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42129 2023-07-14 04:16:18,961 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42001 2023-07-14 04:16:18,962 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b 2023-07-14 04:16:18,962 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42129 2023-07-14 04:16:18,962 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b 2023-07-14 04:16:18,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 04:16:18,963 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42129 2023-07-14 04:16:18,963 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42001 2023-07-14 04:16:18,963 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42001 2023-07-14 04:16:18,963 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
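The registrations logged by ServerManager above ("Registering regionserver=...") become visible to clients through the Admin API. A small sketch that lists the live region servers of a cluster; the quorum address mirrors the one in this log and is purely illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListRegionServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Point at the test cluster's ZooKeeper quorum (port taken from the log, for illustration only).
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "62077");

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // The same membership the master's ServerManager logs as it registers each server.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        System.out.println(sn.getServerName());
      }
    }
  }
}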
2023-07-14 04:16:18,963 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 04:16:18,964 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:18,968 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ZKUtil(162): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,968 WARN [RS:0;jenkins-hbase4:35847] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:16:18,968 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34775,1689308178539] 2023-07-14 04:16:18,968 INFO [RS:0;jenkins-hbase4:35847] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:18,968 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40445,1689308178379] 2023-07-14 04:16:18,969 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ZKUtil(162): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,969 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ZKUtil(162): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,969 WARN [RS:2;jenkins-hbase4:34775] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:16:18,969 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35847,1689308178211] 2023-07-14 04:16:18,969 INFO [RS:2;jenkins-hbase4:34775] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:18,969 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,969 WARN [RS:1;jenkins-hbase4:40445] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
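The ephemeral znodes under /hbase/rs that RegionServerTracker reacts to here are ordinary ZooKeeper ephemerals plus a children watch. A sketch with the plain ZooKeeper client; the connect string and the demo znode name are assumptions for illustration, and the /hbase/rs parent must already exist:

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62077", 90_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // An ephemeral child under /hbase/rs is what announces a live region server;
    // it disappears automatically when the session dies.
    String path = zk.create("/hbase/rs/demo-server,12345," + System.currentTimeMillis(),
        new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    System.out.println("created " + path);

    // Watching the parent's children is how a tracker learns about joins and departures.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    System.out.println("current servers: " + servers);

    zk.close();
  }
}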
2023-07-14 04:16:18,969 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,969 INFO [RS:1;jenkins-hbase4:40445] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:18,969 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,980 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ZKUtil(162): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,981 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ZKUtil(162): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,981 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ZKUtil(162): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,983 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:18,983 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:18,983 INFO [RS:1;jenkins-hbase4:40445] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:18,984 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ZKUtil(162): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,984 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ZKUtil(162): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,984 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ZKUtil(162): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:18,985 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ZKUtil(162): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,985 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ZKUtil(162): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:18,986 DEBUG [RS:0;jenkins-hbase4:35847] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:18,986 INFO [RS:0;jenkins-hbase4:35847] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:18,987 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:16:18,988 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ZKUtil(162): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:18,989 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:18,989 INFO [RS:2;jenkins-hbase4:34775] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:18,992 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/info 2023-07-14 04:16:18,993 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:16:18,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:18,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:16:18,994 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:18,995 INFO [RS:1;jenkins-hbase4:40445] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:18,995 INFO [RS:0;jenkins-hbase4:35847] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:18,995 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:16:18,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:18,995 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:16:18,997 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/table 2023-07-14 04:16:18,997 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:16:18,998 INFO [RS:2;jenkins-hbase4:34775] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:19,002 INFO [RS:0;jenkins-hbase4:35847] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:19,002 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:19,002 INFO [RS:2;jenkins-hbase4:34775] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:19,002 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,002 INFO [RS:1;jenkins-hbase4:40445] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:19,002 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,002 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
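Note: the CompactionConfiguration and PressureAwareCompactionThroughputController values logged above all come from ordinary hbase-site.xml properties. A minimal Java sketch that would reproduce the logged numbers follows; the key names are the usual 2.x ones, and the two throughput-bound keys in particular are an assumption worth checking against your HBase version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static Configuration compactionConf() {
    Configuration conf = HBaseConfiguration.create();
    // ExploringCompactionPolicy selection window: 3..10 files, ratio 1.2 (5.0 off-peak), as logged
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // minCompactSize:128 MB in the log; this key defaults to the memstore flush size
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    // major period 604800000 (one week) with jitter 0.5, as logged
    conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000);
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
    // PressureAwareCompactionThroughputController bounds (100/50 MB/s); key names assumed
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}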
2023-07-14 04:16:19,002 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:19,003 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740 2023-07-14 04:16:19,003 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740 2023-07-14 04:16:19,006 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 04:16:19,007 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:19,009 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:16:19,010 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:19,011 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,011 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,011 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,011 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,011 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,011 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,011 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
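Note: the MemStoreFlusher and FlushLargeStoresPolicy lines are driven by a few sizing knobs. The global memstore limit is a fraction of the region server heap (782.4 M here, which with the default 0.4 fraction implies roughly a 1.9 GB test heap), the low-water mark defaults to 95% of that limit (743.3 M), and because hbase:meta does not set hbase.hregion.percolumnfamilyflush.size.lower.bound in its descriptor, the per-family flush threshold falls back to flush size divided by family count (128 MB / 3 ≈ 42.7 M). A hedged sketch of the relevant settings:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreSizing {
  public static Configuration memstoreConf() {
    Configuration conf = HBaseConfiguration.create();
    // Global memstore limit as a fraction of heap (default 0.4) -> the ~782 MB logged above
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Low-water mark as a fraction of the global limit (default 0.95) -> the ~743 MB low mark
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Per-region flush size; FlushLargeStoresPolicy divides this by the family count when
    // hbase.hregion.percolumnfamilyflush.size.lower.bound is absent from the table descriptor
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
    return conf;
  }
}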
2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:19,012 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:2;jenkins-hbase4:34775] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,012 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:19,012 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:19,013 DEBUG [RS:0;jenkins-hbase4:35847] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,013 DEBUG [RS:1;jenkins-hbase4:40445] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:19,014 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:19,015 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9784377440, jitterRate=-0.08875884115695953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:16:19,016 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:16:19,016 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:16:19,016 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:16:19,016 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:16:19,016 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:16:19,016 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:16:19,018 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,018 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,018 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,018 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,019 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
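Note: the "Chore ScheduledChore name=... is enabled" lines above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, FileSystemUtilizationChore, ...) are all instances of the same mechanism: a named periodic task handed to a shared ChoreService. A minimal sketch of that pattern is below; ScheduledChore and ChoreService are internal (IA.Private) classes, and the chore name here is hypothetical, not one of the chores in this log.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreExample {
  // Hypothetical chore; the real CompactionChecker etc. live inside HRegionServer.
  static final class HeartbeatChore extends ScheduledChore {
    HeartbeatChore(Stoppable stopper) {
      super("HeartbeatChore", stopper, 1000); // name + period in ms, like the period=1000 chores above
    }
    @Override
    protected void chore() {
      System.out.println("heartbeat " + System.currentTimeMillis());
    }
  }

  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("example");
    service.scheduleChore(new HeartbeatChore(stopper)); // logs "... is enabled." like the lines above
    Thread.sleep(3000);
    service.shutdown();
  }
}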
2023-07-14 04:16:19,019 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,019 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,019 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,022 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,023 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,023 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,023 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,023 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:19,023 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:16:19,024 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:19,024 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 04:16:19,024 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 04:16:19,028 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 04:16:19,030 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 04:16:19,038 INFO [RS:2;jenkins-hbase4:34775] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:19,039 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34775,1689308178539-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,044 INFO [RS:0;jenkins-hbase4:35847] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:19,044 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35847,1689308178211-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,046 INFO [RS:1;jenkins-hbase4:40445] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:19,046 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40445,1689308178379-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:19,054 INFO [RS:2;jenkins-hbase4:34775] regionserver.Replication(203): jenkins-hbase4.apache.org,34775,1689308178539 started 2023-07-14 04:16:19,055 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34775,1689308178539, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34775, sessionid=0x101620ba9560003 2023-07-14 04:16:19,055 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:19,055 DEBUG [RS:2;jenkins-hbase4:34775] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:19,055 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34775,1689308178539' 2023-07-14 04:16:19,055 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:19,055 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34775,1689308178539' 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:19,056 DEBUG [RS:2;jenkins-hbase4:34775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:19,057 DEBUG [RS:2;jenkins-hbase4:34775] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:19,057 INFO [RS:2;jenkins-hbase4:34775] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 04:16:19,060 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
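Note: the flush-table-proc and online-snapshot procedure members each region server registers here are the ZK-coordinated procedures exercised from the client through Admin: online-snapshot backs Admin.snapshot, and flush-table-proc is reachable through the generic Admin.execProcedure call. A client-side sketch, with a hypothetical table name:

import java.util.HashMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ProcedureClients {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("t1");                 // hypothetical table
      // online-snapshot members coordinate this snapshot
      admin.snapshot("t1_snap", table);
      // flush-table-proc is the distributed table flush, addressed by its procedure signature
      admin.execProcedure("flush-table-proc", table.getNameAsString(), new HashMap<>());
    }
  }
}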
2023-07-14 04:16:19,060 INFO [RS:0;jenkins-hbase4:35847] regionserver.Replication(203): jenkins-hbase4.apache.org,35847,1689308178211 started 2023-07-14 04:16:19,060 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35847,1689308178211, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35847, sessionid=0x101620ba9560001 2023-07-14 04:16:19,060 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:19,060 DEBUG [RS:0;jenkins-hbase4:35847] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:19,060 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35847,1689308178211' 2023-07-14 04:16:19,061 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:19,061 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ZKUtil(398): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 04:16:19,061 INFO [RS:2;jenkins-hbase4:34775] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 04:16:19,061 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:19,061 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,061 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:19,061 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:19,061 DEBUG [RS:0;jenkins-hbase4:35847] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:19,062 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35847,1689308178211' 2023-07-14 04:16:19,062 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:19,062 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:19,062 INFO [RS:1;jenkins-hbase4:40445] regionserver.Replication(203): jenkins-hbase4.apache.org,40445,1689308178379 started 2023-07-14 04:16:19,062 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40445,1689308178379, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40445, sessionid=0x101620ba9560002 2023-07-14 04:16:19,062 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:19,062 DEBUG [RS:1;jenkins-hbase4:40445] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,062 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40445,1689308178379' 2023-07-14 04:16:19,062 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:19,062 DEBUG [RS:0;jenkins-hbase4:35847] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:19,063 DEBUG [RS:0;jenkins-hbase4:35847] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:19,063 INFO [RS:0;jenkins-hbase4:35847] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 04:16:19,063 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40445,1689308178379' 2023-07-14 04:16:19,063 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ZKUtil(398): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 04:16:19,063 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:19,063 INFO [RS:0;jenkins-hbase4:35847] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 04:16:19,063 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
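Note: RPC quota support is on in this test configuration (hbase.quota.enabled=true), and the /hbase/rpc-throttle znode the servers fail to read only appears once someone actually sets a quota or toggles the throttle. A hedged client-side sketch; the user name and limit are made up, and switchRpcThrottle is assumed to be available in this Admin version.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class QuotaExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.quota.enabled", true); // must also be enabled cluster-side
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Limit a hypothetical user to 100 requests/sec; RegionServerRpcQuotaManager enforces it
      admin.setQuota(QuotaSettingsFactory.throttleUser(
          "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
      // Flips the throttle switch stored under /hbase/rpc-throttle (assumed API)
      admin.switchRpcThrottle(true);
    }
  }
}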
2023-07-14 04:16:19,063 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,064 DEBUG [RS:1;jenkins-hbase4:40445] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:19,064 DEBUG [RS:1;jenkins-hbase4:40445] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:19,064 INFO [RS:1;jenkins-hbase4:40445] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-14 04:16:19,064 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,064 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ZKUtil(398): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-14 04:16:19,064 INFO [RS:1;jenkins-hbase4:40445] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-14 04:16:19,064 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,064 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,165 INFO [RS:0;jenkins-hbase4:35847] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35847%2C1689308178211, suffix=, logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,35847,1689308178211, archiveDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs, maxLogs=32 2023-07-14 04:16:19,165 INFO [RS:2;jenkins-hbase4:34775] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34775%2C1689308178539, suffix=, logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,34775,1689308178539, archiveDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs, maxLogs=32 2023-07-14 04:16:19,166 INFO [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40445%2C1689308178379, suffix=, logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,40445,1689308178379, archiveDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs, maxLogs=32 2023-07-14 04:16:19,180 DEBUG [jenkins-hbase4:36435] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 04:16:19,180 DEBUG [jenkins-hbase4:36435] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:19,181 DEBUG [jenkins-hbase4:36435] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:19,181 DEBUG [jenkins-hbase4:36435] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:19,181 DEBUG [jenkins-hbase4:36435] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:19,181 DEBUG [jenkins-hbase4:36435] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:19,182 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40445,1689308178379, state=OPENING 2023-07-14 04:16:19,184 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 04:16:19,185 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:19,186 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:19,189 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40445,1689308178379}] 2023-07-14 04:16:19,196 WARN [ReadOnlyZKClient-127.0.0.1:62077@0x193e0eeb] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-14 04:16:19,197 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:19,199 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:19,200 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40445] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48746 deadline: 1689308239199, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,214 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK] 2023-07-14 04:16:19,216 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK] 2023-07-14 04:16:19,217 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK] 2023-07-14 04:16:19,217 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK] 2023-07-14 04:16:19,218 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK] 2023-07-14 04:16:19,218 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK] 2023-07-14 04:16:19,219 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK] 2023-07-14 04:16:19,223 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK] 2023-07-14 04:16:19,224 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK] 2023-07-14 04:16:19,229 INFO [RS:2;jenkins-hbase4:34775] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,34775,1689308178539/jenkins-hbase4.apache.org%2C34775%2C1689308178539.1689308179174 2023-07-14 04:16:19,229 DEBUG [RS:2;jenkins-hbase4:34775] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK], DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK], DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK]] 2023-07-14 04:16:19,230 INFO [RS:0;jenkins-hbase4:35847] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,35847,1689308178211/jenkins-hbase4.apache.org%2C35847%2C1689308178211.1689308179171 2023-07-14 04:16:19,230 INFO [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,40445,1689308178379/jenkins-hbase4.apache.org%2C40445%2C1689308178379.1689308179174 2023-07-14 04:16:19,230 DEBUG [RS:0;jenkins-hbase4:35847] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK], DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK], DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK]] 2023-07-14 04:16:19,230 DEBUG [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK], DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK], DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK]] 2023-07-14 04:16:19,351 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,353 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 
04:16:19,354 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:19,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 04:16:19,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:19,361 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40445%2C1689308178379.meta, suffix=.meta, logDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,40445,1689308178379, archiveDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs, maxLogs=32 2023-07-14 04:16:19,376 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK] 2023-07-14 04:16:19,376 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK] 2023-07-14 04:16:19,376 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK] 2023-07-14 04:16:19,380 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/WALs/jenkins-hbase4.apache.org,40445,1689308178379/jenkins-hbase4.apache.org%2C40445%2C1689308178379.meta.1689308179362.meta 2023-07-14 04:16:19,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45293,DS-5c4af4ae-e8f1-4061-b5c4-96368b9b45a2,DISK], DatanodeInfoWithStorage[127.0.0.1:45451,DS-624ad994-9968-475e-ab77-2a06a85fe100,DISK], DatanodeInfoWithStorage[127.0.0.1:36923,DS-83bba43d-8687-408b-a33f-c90cad5510ad,DISK]] 2023-07-14 04:16:19,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:19,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:16:19,380 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 04:16:19,381 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
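Note: the WAL lines above (AsyncFSWALProvider, blocksize=256 MB, rollsize=128 MB, maxLogs=32) map onto a few standard properties: the provider class is chosen by hbase.wal.provider, the block size comes from hbase.regionserver.hlog.blocksize (by default derived from the HDFS block size), the roll size is blocksize times hbase.regionserver.logroll.multiplier, and maxLogs is hbase.regionserver.maxlogs. A sketch that matches the logged values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalTuning {
  public static Configuration walConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");                    // AsyncFSWALProvider, as logged
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // rollsize = 256 MB * 0.5 = 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);
    return conf;
  }
}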
2023-07-14 04:16:19,381 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 04:16:19,381 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:19,381 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 04:16:19,381 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 04:16:19,382 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:16:19,383 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/info 2023-07-14 04:16:19,383 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/info 2023-07-14 04:16:19,384 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:16:19,384 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:19,385 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:16:19,386 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:19,386 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:19,386 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:16:19,387 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:19,387 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:16:19,388 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/table 2023-07-14 04:16:19,388 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/table 2023-07-14 04:16:19,388 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:16:19,389 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:19,390 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740 2023-07-14 04:16:19,392 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740 2023-07-14 04:16:19,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
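Note: the "Created cacheConfig" flags printed for the info, rep_barrier and table families are all defaults, and each of them can be driven per column family from the table descriptor. A sketch of setting them explicitly on a hypothetical family (the family name and the use of builder setters here are illustrative, not taken from this test):

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CacheConfigExample {
  public static ColumnFamilyDescriptor cachingFamily() {
    // Mirrors the flags printed in the "Created cacheConfig" lines above (all default values)
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBlockCacheEnabled(true)       // cacheDataOnRead=true
        .setCacheDataOnWrite(false)
        .setCacheIndexesOnWrite(false)
        .setCacheBloomsOnWrite(false)
        .setEvictBlocksOnClose(false)
        .setPrefetchBlocksOnOpen(false)
        .build();
  }
}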
2023-07-14 04:16:19,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:16:19,399 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11900826080, jitterRate=0.10835079848766327}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:16:19,399 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:16:19,400 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689308179351 2023-07-14 04:16:19,405 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 04:16:19,406 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 04:16:19,406 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40445,1689308178379, state=OPEN 2023-07-14 04:16:19,407 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 04:16:19,407 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:19,409 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 04:16:19,409 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40445,1689308178379 in 221 msec 2023-07-14 04:16:19,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 04:16:19,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 385 msec 2023-07-14 04:16:19,412 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 532 msec 2023-07-14 04:16:19,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689308179412, completionTime=-1 2023-07-14 04:16:19,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 04:16:19,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-14 04:16:19,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 04:16:19,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689308239416 2023-07-14 04:16:19,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689308299416 2023-07-14 04:16:19,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-14 04:16:19,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1689308178027-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1689308178027-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1689308178027-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36435, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
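Note: once the hbase:meta location is published to /hbase/meta-region-server as OPEN, clients stop hitting the NotServingRegionException retries seen earlier in this log and can resolve regions normally. A minimal client-side lookup sketch (the empty row key is just an illustrative probe):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLookup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true); // force fresh lookup
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}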
2023-07-14 04:16:19,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:19,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 04:16:19,425 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 04:16:19,425 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:19,426 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:19,427 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,428 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83 empty. 2023-07-14 04:16:19,428 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,428 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 04:16:19,446 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:19,447 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 07dabf013eff03d6b857a06952ed1c83, NAME => 'hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp 2023-07-14 04:16:19,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:19,456 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 07dabf013eff03d6b857a06952ed1c83, disabling compactions & flushes 2023-07-14 04:16:19,457 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 
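Note: the master creates hbase:namespace itself through CreateTableProcedure, but the descriptor it logs is the same shape a client would build. An equivalent sketch for an ordinary table (the table name is hypothetical) with the same family settings as the logged 'info' family:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeTable {
  public static void main(String[] args) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_ns_like"))             // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)                     // BLOOMFILTER => 'ROW'
            .setInMemory(true)                                     // IN_MEMORY => 'true'
            .setMaxVersions(10)                                    // VERSIONS => '10'
            .setBlocksize(8192)                                    // BLOCKSIZE => '8192'
            .build())
        .build();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(desc); // submits a CreateTableProcedure like the "Stored pid=4" line above
    }
  }
}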
2023-07-14 04:16:19,457 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:19,457 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. after waiting 0 ms 2023-07-14 04:16:19,457 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:19,457 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:19,457 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 07dabf013eff03d6b857a06952ed1c83: 2023-07-14 04:16:19,459 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:19,460 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308179460"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308179460"}]},"ts":"1689308179460"} 2023-07-14 04:16:19,462 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:19,463 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:19,463 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308179463"}]},"ts":"1689308179463"} 2023-07-14 04:16:19,465 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 04:16:19,468 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:19,468 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:19,468 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:19,468 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:19,468 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:19,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=07dabf013eff03d6b857a06952ed1c83, ASSIGN}] 2023-07-14 04:16:19,470 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=07dabf013eff03d6b857a06952ed1c83, ASSIGN 2023-07-14 04:16:19,470 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=07dabf013eff03d6b857a06952ed1c83, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40445,1689308178379; forceNewPlan=false, retain=false 2023-07-14 04:16:19,503 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:19,505 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 04:16:19,509 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:19,510 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:19,512 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:19,513 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931 empty. 2023-07-14 04:16:19,513 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:19,513 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 04:16:19,621 INFO [jenkins-hbase4:36435] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
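The 'hbase:rsgroup' create request at the start of the entry above carries TABLE_ATTRIBUTES for the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A minimal sketch of expressing the same descriptor through TableDescriptorBuilder follows; only the two class names are taken from the log, while the wrapper class and method layout are assumptions.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("hbase", "rsgroup"))
        // Single 'm' family with one version, as in the logged schema.
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)
            .build())
        // Coprocessor and split policy as they appear in the logged TABLE_ATTRIBUTES.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}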
2023-07-14 04:16:19,622 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=07dabf013eff03d6b857a06952ed1c83, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,622 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308179622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308179622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308179622"}]},"ts":"1689308179622"} 2023-07-14 04:16:19,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 07dabf013eff03d6b857a06952ed1c83, server=jenkins-hbase4.apache.org,40445,1689308178379}] 2023-07-14 04:16:19,780 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:19,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 07dabf013eff03d6b857a06952ed1c83, NAME => 'hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:19,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:19,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,780 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,782 INFO [StoreOpener-07dabf013eff03d6b857a06952ed1c83-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,783 DEBUG [StoreOpener-07dabf013eff03d6b857a06952ed1c83-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/info 2023-07-14 04:16:19,783 DEBUG [StoreOpener-07dabf013eff03d6b857a06952ed1c83-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/info 2023-07-14 04:16:19,783 INFO [StoreOpener-07dabf013eff03d6b857a06952ed1c83-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 07dabf013eff03d6b857a06952ed1c83 columnFamilyName info 2023-07-14 04:16:19,784 INFO [StoreOpener-07dabf013eff03d6b857a06952ed1c83-1] regionserver.HStore(310): Store=07dabf013eff03d6b857a06952ed1c83/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:19,785 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,785 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,788 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:19,789 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:19,790 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 07dabf013eff03d6b857a06952ed1c83; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11454025440, jitterRate=0.06673924624919891}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:19,790 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 07dabf013eff03d6b857a06952ed1c83: 2023-07-14 04:16:19,791 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83., pid=7, masterSystemTime=1689308179776 2023-07-14 04:16:19,793 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:19,793 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 
2023-07-14 04:16:19,794 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=07dabf013eff03d6b857a06952ed1c83, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:19,794 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308179794"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308179794"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308179794"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308179794"}]},"ts":"1689308179794"} 2023-07-14 04:16:19,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-14 04:16:19,797 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 07dabf013eff03d6b857a06952ed1c83, server=jenkins-hbase4.apache.org,40445,1689308178379 in 171 msec 2023-07-14 04:16:19,798 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-14 04:16:19,798 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=07dabf013eff03d6b857a06952ed1c83, ASSIGN in 329 msec 2023-07-14 04:16:19,799 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:19,799 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308179799"}]},"ts":"1689308179799"} 2023-07-14 04:16:19,800 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 04:16:19,802 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:19,803 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 379 msec 2023-07-14 04:16:19,825 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 04:16:19,826 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:19,826 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:19,831 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 04:16:19,838 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): 
master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:19,841 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-14 04:16:19,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 04:16:19,846 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-14 04:16:19,846 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 04:16:19,937 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:19,938 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cd988f94c4586f06ff8324167b1e9931, NAME => 'hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp 2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing cd988f94c4586f06ff8324167b1e9931, disabling compactions & flushes 2023-07-14 04:16:19,948 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. after waiting 0 ms 2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:19,948 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 
2023-07-14 04:16:19,948 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for cd988f94c4586f06ff8324167b1e9931: 2023-07-14 04:16:19,950 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:19,951 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308179951"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308179951"}]},"ts":"1689308179951"} 2023-07-14 04:16:19,952 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:19,953 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:19,953 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308179953"}]},"ts":"1689308179953"} 2023-07-14 04:16:19,954 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 04:16:19,957 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:19,957 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:19,957 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:19,957 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:19,957 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:19,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cd988f94c4586f06ff8324167b1e9931, ASSIGN}] 2023-07-14 04:16:19,958 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cd988f94c4586f06ff8324167b1e9931, ASSIGN 2023-07-14 04:16:19,959 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=cd988f94c4586f06ff8324167b1e9931, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40445,1689308178379; forceNewPlan=false, retain=false 2023-07-14 04:16:20,109 INFO [jenkins-hbase4:36435] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 04:16:20,110 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cd988f94c4586f06ff8324167b1e9931, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:20,111 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308180110"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308180110"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308180110"}]},"ts":"1689308180110"} 2023-07-14 04:16:20,112 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure cd988f94c4586f06ff8324167b1e9931, server=jenkins-hbase4.apache.org,40445,1689308178379}] 2023-07-14 04:16:20,268 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:20,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd988f94c4586f06ff8324167b1e9931, NAME => 'hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:20,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:16:20,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. service=MultiRowMutationService 2023-07-14 04:16:20,269 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-14 04:16:20,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:20,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,270 INFO [StoreOpener-cd988f94c4586f06ff8324167b1e9931-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,272 DEBUG [StoreOpener-cd988f94c4586f06ff8324167b1e9931-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/m 2023-07-14 04:16:20,272 DEBUG [StoreOpener-cd988f94c4586f06ff8324167b1e9931-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/m 2023-07-14 04:16:20,273 INFO [StoreOpener-cd988f94c4586f06ff8324167b1e9931-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd988f94c4586f06ff8324167b1e9931 columnFamilyName m 2023-07-14 04:16:20,273 INFO [StoreOpener-cd988f94c4586f06ff8324167b1e9931-1] regionserver.HStore(310): Store=cd988f94c4586f06ff8324167b1e9931/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:20,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,275 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:20,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:20,281 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd988f94c4586f06ff8324167b1e9931; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7ab03c7b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:20,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd988f94c4586f06ff8324167b1e9931: 2023-07-14 04:16:20,282 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931., pid=11, masterSystemTime=1689308180264 2023-07-14 04:16:20,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:20,283 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:20,284 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=cd988f94c4586f06ff8324167b1e9931, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:20,284 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308180284"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308180284"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308180284"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308180284"}]},"ts":"1689308180284"} 2023-07-14 04:16:20,286 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-14 04:16:20,287 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure cd988f94c4586f06ff8324167b1e9931, server=jenkins-hbase4.apache.org,40445,1689308178379 in 173 msec 2023-07-14 04:16:20,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=6 2023-07-14 04:16:20,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=cd988f94c4586f06ff8324167b1e9931, ASSIGN in 330 msec 2023-07-14 04:16:20,295 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:20,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 455 msec 2023-07-14 04:16:20,299 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:20,299 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308180299"}]},"ts":"1689308180299"} 2023-07-14 04:16:20,300 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 04:16:20,302 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:20,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 798 msec 2023-07-14 04:16:20,309 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 04:16:20,309 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-14 04:16:20,310 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 04:16:20,313 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 04:16:20,313 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.594sec 2023-07-14 04:16:20,314 DEBUG [Listener at localhost/34751] zookeeper.ReadOnlyZKClient(139): Connect 0x06ae0755 to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:20,314 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:20,314 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:20,317 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:16:20,318 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36435,1689308178027] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 04:16:20,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-14 04:16:20,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:20,320 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-14 04:16:20,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-14 04:16:20,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-14 04:16:20,325 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:20,326 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:20,330 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,331 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4 empty. 2023-07-14 04:16:20,332 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,332 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-14 04:16:20,332 DEBUG [Listener at localhost/34751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e234f25, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:20,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-14 04:16:20,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-14 04:16:20,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:20,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:20,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 04:16:20,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 04:16:20,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1689308178027-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-14 04:16:20,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36435,1689308178027-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-14 04:16:20,347 DEBUG [hconnection-0x456bd6f7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:20,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 04:16:20,360 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:20,364 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:20,365 INFO [Listener at localhost/34751] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:20,373 DEBUG [Listener at localhost/34751] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 04:16:20,376 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 04:16:20,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-14 04:16:20,392 DEBUG [Listener at localhost/34751] zookeeper.ReadOnlyZKClient(139): Connect 0x2b9ac065 to 127.0.0.1:62077 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:20,394 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 04:16:20,395 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:20,420 DEBUG [Listener at localhost/34751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@305d778f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:20,421 INFO [Listener at localhost/34751] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62077 2023-07-14 04:16:20,424 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into 
hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:20,424 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:20,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-14 04:16:20,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101620ba956000a connected 2023-07-14 04:16:20,499 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3bb85c9d3842607f2d540fe21dee77d4, NAME => 'hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp 2023-07-14 04:16:20,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-14 04:16:20,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-14 04:16:20,520 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:20,526 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 94 msec 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 3bb85c9d3842607f2d540fe21dee77d4, disabling compactions & flushes 2023-07-14 04:16:20,537 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 
after waiting 0 ms 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:20,537 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:20,537 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 3bb85c9d3842607f2d540fe21dee77d4: 2023-07-14 04:16:20,541 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:20,542 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689308180542"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308180542"}]},"ts":"1689308180542"} 2023-07-14 04:16:20,543 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:20,544 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:20,544 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308180544"}]},"ts":"1689308180544"} 2023-07-14 04:16:20,546 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-14 04:16:20,550 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:20,550 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:20,550 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:20,550 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:20,550 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:20,550 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=3bb85c9d3842607f2d540fe21dee77d4, ASSIGN}] 2023-07-14 04:16:20,552 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=3bb85c9d3842607f2d540fe21dee77d4, ASSIGN 2023-07-14 04:16:20,553 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=3bb85c9d3842607f2d540fe21dee77d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34775,1689308178539; forceNewPlan=false, retain=false 2023-07-14 04:16:20,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-14 04:16:20,621 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:20,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-14 04:16:20,625 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:20,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-14 04:16:20,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:16:20,627 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:20,627 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:16:20,629 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:20,631 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,631 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a empty. 
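The two client requests logged above, creating namespace 'np1' with hbase.namespace.quota.maxregions=5 / hbase.namespace.quota.maxtables=2 and then table 'np1:table1' with family 'fam1', map roughly onto the following client-side sketch. The connection setup and wrapper class are assumptions; the test drives these calls through its own admin client rather than this code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class Np1QuotaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace with the region/table quotas seen in the log.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // Single-family table inside that namespace, as in the logged create request.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}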
2023-07-14 04:16:20,632 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,632 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-14 04:16:20,655 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:20,659 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 22b6ab0d8589847b3a26021ceaa8b65a, NAME => 'np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp 2023-07-14 04:16:20,664 WARN [IPC Server handler 3 on default port 42129] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-14 04:16:20,664 WARN [IPC Server handler 3 on default port 42129] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-14 04:16:20,664 WARN [IPC Server handler 3 on default port 42129] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-14 04:16:20,673 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:20,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 22b6ab0d8589847b3a26021ceaa8b65a, disabling compactions & flushes 2023-07-14 04:16:20,674 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:20,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 
2023-07-14 04:16:20,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. after waiting 0 ms 2023-07-14 04:16:20,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:20,674 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:20,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 22b6ab0d8589847b3a26021ceaa8b65a: 2023-07-14 04:16:20,676 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:20,677 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308180677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308180677"}]},"ts":"1689308180677"} 2023-07-14 04:16:20,679 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:20,680 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:20,680 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308180680"}]},"ts":"1689308180680"} 2023-07-14 04:16:20,681 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-14 04:16:20,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:20,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:20,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:20,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:20,684 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:20,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, ASSIGN}] 2023-07-14 04:16:20,686 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, ASSIGN 2023-07-14 04:16:20,686 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35847,1689308178211; forceNewPlan=false, retain=false 2023-07-14 04:16:20,703 INFO [jenkins-hbase4:36435] 
balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-14 04:16:20,705 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=3bb85c9d3842607f2d540fe21dee77d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:20,705 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=22b6ab0d8589847b3a26021ceaa8b65a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:20,706 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689308180705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308180705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308180705"}]},"ts":"1689308180705"} 2023-07-14 04:16:20,706 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308180705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308180705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308180705"}]},"ts":"1689308180705"} 2023-07-14 04:16:20,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; OpenRegionProcedure 3bb85c9d3842607f2d540fe21dee77d4, server=jenkins-hbase4.apache.org,34775,1689308178539}] 2023-07-14 04:16:20,708 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 22b6ab0d8589847b3a26021ceaa8b65a, server=jenkins-hbase4.apache.org,35847,1689308178211}] 2023-07-14 04:16:20,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:16:20,861 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:20,861 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:20,861 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:16:20,861 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:16:20,863 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38030, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:20,863 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51344, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:20,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:20,868 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 
2023-07-14 04:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22b6ab0d8589847b3a26021ceaa8b65a, NAME => 'np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3bb85c9d3842607f2d540fe21dee77d4, NAME => 'hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,870 INFO [StoreOpener-22b6ab0d8589847b3a26021ceaa8b65a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,870 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,871 DEBUG [StoreOpener-22b6ab0d8589847b3a26021ceaa8b65a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/fam1 2023-07-14 04:16:20,871 DEBUG [StoreOpener-22b6ab0d8589847b3a26021ceaa8b65a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/fam1 2023-07-14 
04:16:20,871 DEBUG [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/q 2023-07-14 04:16:20,871 DEBUG [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/q 2023-07-14 04:16:20,872 INFO [StoreOpener-22b6ab0d8589847b3a26021ceaa8b65a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22b6ab0d8589847b3a26021ceaa8b65a columnFamilyName fam1 2023-07-14 04:16:20,872 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3bb85c9d3842607f2d540fe21dee77d4 columnFamilyName q 2023-07-14 04:16:20,872 INFO [StoreOpener-22b6ab0d8589847b3a26021ceaa8b65a-1] regionserver.HStore(310): Store=22b6ab0d8589847b3a26021ceaa8b65a/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:20,872 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] regionserver.HStore(310): Store=3bb85c9d3842607f2d540fe21dee77d4/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:20,872 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,874 DEBUG [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/u 2023-07-14 04:16:20,874 DEBUG [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/u 2023-07-14 04:16:20,874 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3bb85c9d3842607f2d540fe21dee77d4 columnFamilyName u 2023-07-14 04:16:20,874 INFO [StoreOpener-3bb85c9d3842607f2d540fe21dee77d4-1] regionserver.HStore(310): Store=3bb85c9d3842607f2d540fe21dee77d4/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:20,875 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,876 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,876 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:20,879 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
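The FlushLargeStoresPolicy line above notes that hbase:quota carries no hbase.hregion.percolumnfamilyflush.size.lower.bound in its descriptor, so the region falls back to memstore-flush-size divided by the number of families. A test that wanted to pin that bound could set the property on the table descriptor instead; a sketch under that assumption (only the property key comes from the log, the 16 MB value and method name are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Illustrative only: carry an explicit per-family flush lower bound in the
    // table descriptor so FlushLargeStoresPolicy does not compute a fallback.
    static TableDescriptor withFlushLowerBound() {
      return TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
              String.valueOf(16 * 1024 * 1024))
          .build();
    }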
2023-07-14 04:16:20,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:20,880 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22b6ab0d8589847b3a26021ceaa8b65a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11514939840, jitterRate=0.07241234183311462}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:20,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22b6ab0d8589847b3a26021ceaa8b65a: 2023-07-14 04:16:20,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:20,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a., pid=18, masterSystemTime=1689308180861 2023-07-14 04:16:20,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:20,887 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=22b6ab0d8589847b3a26021ceaa8b65a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:20,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 
2023-07-14 04:16:20,891 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308180887"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308180887"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308180887"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308180887"}]},"ts":"1689308180887"} 2023-07-14 04:16:20,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:20,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-14 04:16:20,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 22b6ab0d8589847b3a26021ceaa8b65a, server=jenkins-hbase4.apache.org,35847,1689308178211 in 184 msec 2023-07-14 04:16:20,896 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3bb85c9d3842607f2d540fe21dee77d4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10172751360, jitterRate=-0.052588701248168945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-14 04:16:20,896 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3bb85c9d3842607f2d540fe21dee77d4: 2023-07-14 04:16:20,897 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4., pid=17, masterSystemTime=1689308180861 2023-07-14 04:16:20,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-14 04:16:20,899 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, ASSIGN in 210 msec 2023-07-14 04:16:20,901 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:20,901 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308180901"}]},"ts":"1689308180901"} 2023-07-14 04:16:20,903 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-14 04:16:20,904 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:20,904 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 
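The region for np1:table1 is now open (pid=18) and CreateTableProcedure pid=15 is working through its final states. On the client side, everything from CREATE_TABLE_PRE_OPERATION to this point is driven by a single Admin.createTable call; a sketch, assuming conn is an already-open Connection to this mini-cluster:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Sketch of the client call behind CreateTableProcedure pid=15:
    // create np1:table1 with the single column family 'fam1'.
    static void createTable1(Connection conn) throws Exception {
      try (Admin admin = conn.getAdmin()) {
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("np1:table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build());   // returns once the master reports the procedure complete
      }
    }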
2023-07-14 04:16:20,905 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=3bb85c9d3842607f2d540fe21dee77d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:20,905 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689308180905"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308180905"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308180905"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308180905"}]},"ts":"1689308180905"} 2023-07-14 04:16:20,907 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:20,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-14 04:16:20,910 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; OpenRegionProcedure 3bb85c9d3842607f2d540fe21dee77d4, server=jenkins-hbase4.apache.org,34775,1689308178539 in 199 msec 2023-07-14 04:16:20,910 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 286 msec 2023-07-14 04:16:20,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-14 04:16:20,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=3bb85c9d3842607f2d540fe21dee77d4, ASSIGN in 360 msec 2023-07-14 04:16:20,915 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:20,915 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308180915"}]},"ts":"1689308180915"} 2023-07-14 04:16:20,916 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-14 04:16:20,918 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:20,919 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 599 msec 2023-07-14 04:16:20,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-14 04:16:20,928 INFO [Listener at localhost/34751] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-14 04:16:20,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:20,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-14 04:16:20,932 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:20,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-14 04:16:20,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 04:16:20,950 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:20,951 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:20,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=27 msec 2023-07-14 04:16:21,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 04:16:21,038 INFO [Listener at localhost/34751] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
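Pid=19 rolls back because namespace np1 carries a region quota of 5 and np1:table2 would push the count to 6. That quota is attached when the namespace is created; a sketch of the setup that produces this behaviour, assuming an open Admin handle (the configuration key is the standard namespace region-quota key, the values shown are illustrative):

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch: cap namespace 'np1' at 5 regions. A later createTable whose region
    // count would exceed the cap is expected to fail with
    // org.apache.hadoop.hbase.quotas.QuotaExceededException, as pid=19 shows.
    static void createCappedNamespace(Admin admin) throws Exception {
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());
    }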
2023-07-14 04:16:21,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:21,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:21,041 INFO [Listener at localhost/34751] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-14 04:16:21,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-14 04:16:21,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-14 04:16:21,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 04:16:21,045 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308181045"}]},"ts":"1689308181045"} 2023-07-14 04:16:21,047 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-14 04:16:21,048 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-14 04:16:21,049 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, UNASSIGN}] 2023-07-14 04:16:21,050 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, UNASSIGN 2023-07-14 04:16:21,050 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=22b6ab0d8589847b3a26021ceaa8b65a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:21,050 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308181050"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308181050"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308181050"}]},"ts":"1689308181050"} 2023-07-14 04:16:21,051 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 22b6ab0d8589847b3a26021ceaa8b65a, server=jenkins-hbase4.apache.org,35847,1689308178211}] 2023-07-14 04:16:21,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 04:16:21,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:21,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22b6ab0d8589847b3a26021ceaa8b65a, disabling compactions & flushes 2023-07-14 04:16:21,205 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:21,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:21,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. after waiting 0 ms 2023-07-14 04:16:21,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:21,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:21,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a. 2023-07-14 04:16:21,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22b6ab0d8589847b3a26021ceaa8b65a: 2023-07-14 04:16:21,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:21,213 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=22b6ab0d8589847b3a26021ceaa8b65a, regionState=CLOSED 2023-07-14 04:16:21,213 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308181213"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308181213"}]},"ts":"1689308181213"} 2023-07-14 04:16:21,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-14 04:16:21,217 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 22b6ab0d8589847b3a26021ceaa8b65a, server=jenkins-hbase4.apache.org,35847,1689308178211 in 165 msec 2023-07-14 04:16:21,218 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-14 04:16:21,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=22b6ab0d8589847b3a26021ceaa8b65a, UNASSIGN in 168 msec 2023-07-14 04:16:21,220 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308181220"}]},"ts":"1689308181220"} 2023-07-14 04:16:21,221 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-14 04:16:21,222 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-14 04:16:21,224 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 181 msec 2023-07-14 04:16:21,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 04:16:21,347 INFO [Listener at localhost/34751] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-14 04:16:21,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-14 04:16:21,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-14 04:16:21,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:21,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:16:21,358 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:21,359 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/fam1, FileablePath, hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/recovered.edits] 2023-07-14 04:16:21,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-14 04:16:21,364 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/recovered.edits/4.seqid to hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/archive/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a/recovered.edits/4.seqid 2023-07-14 04:16:21,364 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/.tmp/data/np1/table1/22b6ab0d8589847b3a26021ceaa8b65a 2023-07-14 04:16:21,364 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-14 04:16:21,367 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,368 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-14 04:16:21,369 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-14 04:16:21,370 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,370 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-14 04:16:21,371 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308181370"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:21,373 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 04:16:21,373 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 22b6ab0d8589847b3a26021ceaa8b65a, NAME => 'np1:table1,,1689308180621.22b6ab0d8589847b3a26021ceaa8b65a.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 04:16:21,373 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-14 04:16:21,373 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308181373"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:21,374 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-14 04:16:21,377 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-14 04:16:21,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-14 04:16:21,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-14 04:16:21,461 INFO [Listener at localhost/34751] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-14 04:16:21,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-14 04:16:21,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,474 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,477 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,479 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-14 04:16:21,480 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-14 04:16:21,480 DEBUG [Listener at 
localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:21,480 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,482 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-14 04:16:21,483 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-14 04:16:21,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36435] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-14 04:16:21,580 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-14 04:16:21,580 INFO [Listener at localhost/34751] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06ae0755 to 127.0.0.1:62077 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] util.JVMClusterUtil(257): Found active master hash=1513217814, stopped=false 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:16:21,581 DEBUG [Listener at localhost/34751] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-14 04:16:21,581 INFO [Listener at localhost/34751] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:21,583 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:21,583 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:21,583 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:21,583 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:21,583 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, 
quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:21,583 INFO [Listener at localhost/34751] procedure2.ProcedureExecutor(629): Stopping 2023-07-14 04:16:21,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:21,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:21,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:21,585 DEBUG [Listener at localhost/34751] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x193e0eeb to 127.0.0.1:62077 2023-07-14 04:16:21,586 DEBUG [Listener at localhost/34751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,586 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:21,586 INFO [Listener at localhost/34751] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35847,1689308178211' ***** 2023-07-14 04:16:21,586 INFO [Listener at localhost/34751] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:21,586 INFO [Listener at localhost/34751] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40445,1689308178379' ***** 2023-07-14 04:16:21,586 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:21,586 INFO [Listener at localhost/34751] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:21,586 INFO [Listener at localhost/34751] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34775,1689308178539' ***** 2023-07-14 04:16:21,587 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:21,588 INFO [Listener at localhost/34751] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:21,591 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:21,602 INFO [RS:0;jenkins-hbase4:35847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6f3181ff{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:21,602 INFO [RS:2;jenkins-hbase4:34775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@169a71c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:21,602 INFO [RS:1;jenkins-hbase4:40445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@84e05f2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
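The disable, delete, and namespace-removal procedures earlier in this segment (pids 20 to 24) correspond to a short Admin sequence on the client side; a sketch, assuming an open Admin handle:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch of the teardown behind pids 20-24: disable and delete np1:table1,
    // then drop the now-empty namespace.
    static void dropTable1AndNamespace(Admin admin) throws Exception {
      TableName tn = TableName.valueOf("np1:table1");
      if (admin.tableExists(tn)) {
        admin.disableTable(tn);      // DisableTableProcedure (pid=20)
        admin.deleteTable(tn);       // DeleteTableProcedure (pid=23)
      }
      admin.deleteNamespace("np1");  // DeleteNamespaceProcedure (pid=24)
    }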
2023-07-14 04:16:21,602 INFO [RS:2;jenkins-hbase4:34775] server.AbstractConnector(383): Stopped ServerConnector@7257b488{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:21,602 INFO [RS:1;jenkins-hbase4:40445] server.AbstractConnector(383): Stopped ServerConnector@1da00a90{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:21,602 INFO [RS:0;jenkins-hbase4:35847] server.AbstractConnector(383): Stopped ServerConnector@6acb7487{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:21,602 INFO [RS:1;jenkins-hbase4:40445] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:21,602 INFO [RS:2;jenkins-hbase4:34775] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:21,603 INFO [RS:1;jenkins-hbase4:40445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4effafd7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:21,602 INFO [RS:0;jenkins-hbase4:35847] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:21,605 INFO [RS:1;jenkins-hbase4:40445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1888b8ca{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:21,605 INFO [RS:2;jenkins-hbase4:34775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@173770e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:21,605 INFO [RS:0;jenkins-hbase4:35847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29927ba0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:21,605 INFO [RS:2;jenkins-hbase4:34775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@27875105{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:21,606 INFO [RS:0;jenkins-hbase4:35847] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72e39cc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:21,606 INFO [RS:1;jenkins-hbase4:40445] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:21,606 INFO [RS:2;jenkins-hbase4:34775] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:21,606 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:21,606 INFO [RS:2;jenkins-hbase4:34775] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:21,607 INFO [RS:2;jenkins-hbase4:34775] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-14 04:16:21,608 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(3305): Received CLOSE for 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:21,608 INFO [RS:0;jenkins-hbase4:35847] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:21,606 INFO [RS:1;jenkins-hbase4:40445] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:21,608 INFO [RS:0;jenkins-hbase4:35847] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:21,608 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:21,608 INFO [RS:0;jenkins-hbase4:35847] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:21,608 INFO [RS:1;jenkins-hbase4:40445] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:21,608 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:21,608 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(3305): Received CLOSE for 07dabf013eff03d6b857a06952ed1c83 2023-07-14 04:16:21,608 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:21,608 DEBUG [RS:0;jenkins-hbase4:35847] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x42eb7e95 to 127.0.0.1:62077 2023-07-14 04:16:21,608 DEBUG [RS:0;jenkins-hbase4:35847] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,608 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35847,1689308178211; all regions closed. 2023-07-14 04:16:21,608 DEBUG [RS:0;jenkins-hbase4:35847] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-14 04:16:21,608 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:21,608 DEBUG [RS:2;jenkins-hbase4:34775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f7751bf to 127.0.0.1:62077 2023-07-14 04:16:21,609 DEBUG [RS:2;jenkins-hbase4:34775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,609 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 04:16:21,609 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1478): Online Regions={3bb85c9d3842607f2d540fe21dee77d4=hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4.} 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(3305): Received CLOSE for cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:21,609 DEBUG [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1504): Waiting on 3bb85c9d3842607f2d540fe21dee77d4 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:21,609 DEBUG [RS:1;jenkins-hbase4:40445] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1cfa9bf6 to 127.0.0.1:62077 2023-07-14 04:16:21,609 DEBUG [RS:1;jenkins-hbase4:40445] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 04:16:21,609 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-14 04:16:21,609 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 07dabf013eff03d6b857a06952ed1c83=hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83., cd988f94c4586f06ff8324167b1e9931=hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931.} 2023-07-14 04:16:21,609 DEBUG [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1504): Waiting on 07dabf013eff03d6b857a06952ed1c83, 1588230740, cd988f94c4586f06ff8324167b1e9931 2023-07-14 04:16:21,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3bb85c9d3842607f2d540fe21dee77d4, disabling compactions & flushes 2023-07-14 04:16:21,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:21,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:21,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. after waiting 0 ms 2023-07-14 04:16:21,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:21,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 07dabf013eff03d6b857a06952ed1c83, disabling compactions & flushes 2023-07-14 04:16:21,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:21,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:21,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. after waiting 0 ms 2023-07-14 04:16:21,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 
2023-07-14 04:16:21,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 07dabf013eff03d6b857a06952ed1c83 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-14 04:16:21,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/quota/3bb85c9d3842607f2d540fe21dee77d4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:21,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:21,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3bb85c9d3842607f2d540fe21dee77d4: 2023-07-14 04:16:21,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689308180319.3bb85c9d3842607f2d540fe21dee77d4. 2023-07-14 04:16:21,617 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:16:21,621 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:16:21,621 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:16:21,621 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:16:21,621 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:16:21,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-14 04:16:21,625 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,626 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,627 DEBUG [RS:0;jenkins-hbase4:35847] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs 2023-07-14 04:16:21,627 INFO [RS:0;jenkins-hbase4:35847] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35847%2C1689308178211:(num 1689308179171) 2023-07-14 04:16:21,627 DEBUG [RS:0;jenkins-hbase4:35847] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,627 INFO [RS:0;jenkins-hbase4:35847] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,628 INFO [RS:0;jenkins-hbase4:35847] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:21,628 INFO [RS:0;jenkins-hbase4:35847] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:21,628 INFO [RS:0;jenkins-hbase4:35847] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:21,628 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
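Everything from the "Shutting down minicluster" line onward is HBaseTestingUtility tearing the cluster down: each region server flushes and closes its online regions, then the master and the DFS/ZooKeeper processes stop. In the test class this is typically triggered from an @AfterClass hook; a sketch, assuming TEST_UTIL is the utility that started the cluster at the top of this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    // TEST_UTIL is assumed to be the same HBaseTestingUtility that started the
    // mini-cluster; declared here only to keep the sketch self-contained.
    static HBaseTestingUtility TEST_UTIL;

    @AfterClass
    public static void tearDownAfterClass() throws Exception {
      TEST_UTIL.shutdownMiniCluster();  // stops region servers, master, DFS and ZK
    }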
2023-07-14 04:16:21,628 INFO [RS:0;jenkins-hbase4:35847] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:21,629 INFO [RS:0;jenkins-hbase4:35847] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35847 2023-07-14 04:16:21,635 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,638 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35847,1689308178211 2023-07-14 04:16:21,639 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,640 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35847,1689308178211] 2023-07-14 04:16:21,640 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35847,1689308178211; numProcessing=1 2023-07-14 04:16:21,641 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35847,1689308178211 already deleted, retry=false 2023-07-14 04:16:21,641 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35847,1689308178211 expired; onlineServers=2 2023-07-14 04:16:21,649 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/info/5a5829ac20304833b33f32a3de23a164 2023-07-14 04:16:21,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at 
sequenceid=8 (bloomFilter=true), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/.tmp/info/23865d71de09414493a38cb14955e259 2023-07-14 04:16:21,657 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5a5829ac20304833b33f32a3de23a164 2023-07-14 04:16:21,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 23865d71de09414493a38cb14955e259 2023-07-14 04:16:21,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/.tmp/info/23865d71de09414493a38cb14955e259 as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/info/23865d71de09414493a38cb14955e259 2023-07-14 04:16:21,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 23865d71de09414493a38cb14955e259 2023-07-14 04:16:21,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/info/23865d71de09414493a38cb14955e259, entries=3, sequenceid=8, filesize=5.0 K 2023-07-14 04:16:21,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 07dabf013eff03d6b857a06952ed1c83 in 49ms, sequenceid=8, compaction requested=false 2023-07-14 04:16:21,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-14 04:16:21,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/namespace/07dabf013eff03d6b857a06952ed1c83/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-14 04:16:21,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:21,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 07dabf013eff03d6b857a06952ed1c83: 2023-07-14 04:16:21,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689308179423.07dabf013eff03d6b857a06952ed1c83. 2023-07-14 04:16:21,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd988f94c4586f06ff8324167b1e9931, disabling compactions & flushes 2023-07-14 04:16:21,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:21,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 
2023-07-14 04:16:21,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. after waiting 0 ms 2023-07-14 04:16:21,677 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:21,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cd988f94c4586f06ff8324167b1e9931 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-14 04:16:21,680 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/rep_barrier/cfe9cb92a80b4f568591f30c836a2d9b 2023-07-14 04:16:21,688 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cfe9cb92a80b4f568591f30c836a2d9b 2023-07-14 04:16:21,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/.tmp/m/0eab854bcb5a43abb00cc9e5eb5ac06b 2023-07-14 04:16:21,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/.tmp/m/0eab854bcb5a43abb00cc9e5eb5ac06b as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/m/0eab854bcb5a43abb00cc9e5eb5ac06b 2023-07-14 04:16:21,703 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/table/34c4495165fe4c398378fc173bc9371a 2023-07-14 04:16:21,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/m/0eab854bcb5a43abb00cc9e5eb5ac06b, entries=1, sequenceid=7, filesize=4.9 K 2023-07-14 04:16:21,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for cd988f94c4586f06ff8324167b1e9931 in 29ms, sequenceid=7, compaction requested=false 2023-07-14 04:16:21,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-14 04:16:21,712 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34c4495165fe4c398378fc173bc9371a 2023-07-14 04:16:21,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/rsgroup/cd988f94c4586f06ff8324167b1e9931/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-14 04:16:21,713 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/info/5a5829ac20304833b33f32a3de23a164 as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/info/5a5829ac20304833b33f32a3de23a164 2023-07-14 04:16:21,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:21,713 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:21,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd988f94c4586f06ff8324167b1e9931: 2023-07-14 04:16:21,713 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689308179503.cd988f94c4586f06ff8324167b1e9931. 2023-07-14 04:16:21,719 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5a5829ac20304833b33f32a3de23a164 2023-07-14 04:16:21,719 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/info/5a5829ac20304833b33f32a3de23a164, entries=32, sequenceid=31, filesize=8.5 K 2023-07-14 04:16:21,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/rep_barrier/cfe9cb92a80b4f568591f30c836a2d9b as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/rep_barrier/cfe9cb92a80b4f568591f30c836a2d9b 2023-07-14 04:16:21,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cfe9cb92a80b4f568591f30c836a2d9b 2023-07-14 04:16:21,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/rep_barrier/cfe9cb92a80b4f568591f30c836a2d9b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-14 04:16:21,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/.tmp/table/34c4495165fe4c398378fc173bc9371a as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/table/34c4495165fe4c398378fc173bc9371a 2023-07-14 04:16:21,735 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34c4495165fe4c398378fc173bc9371a 2023-07-14 04:16:21,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): 
Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/table/34c4495165fe4c398378fc173bc9371a, entries=8, sequenceid=31, filesize=5.2 K 2023-07-14 04:16:21,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 115ms, sequenceid=31, compaction requested=false 2023-07-14 04:16:21,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-14 04:16:21,755 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-14 04:16:21,755 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:21,756 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:21,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:16:21,756 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:21,784 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:21,784 INFO [RS:0;jenkins-hbase4:35847] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35847,1689308178211; zookeeper connection closed. 2023-07-14 04:16:21,784 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:35847-0x101620ba9560001, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:21,785 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@748af768] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@748af768 2023-07-14 04:16:21,809 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34775,1689308178539; all regions closed. 2023-07-14 04:16:21,809 DEBUG [RS:2;jenkins-hbase4:34775] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-14 04:16:21,809 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40445,1689308178379; all regions closed. 2023-07-14 04:16:21,810 DEBUG [RS:1;jenkins-hbase4:40445] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-14 04:16:21,821 DEBUG [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs 2023-07-14 04:16:21,821 INFO [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40445%2C1689308178379.meta:.meta(num 1689308179362) 2023-07-14 04:16:21,822 DEBUG [RS:2;jenkins-hbase4:34775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs 2023-07-14 04:16:21,822 INFO [RS:2;jenkins-hbase4:34775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34775%2C1689308178539:(num 1689308179174) 2023-07-14 04:16:21,822 DEBUG [RS:2;jenkins-hbase4:34775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,822 INFO [RS:2;jenkins-hbase4:34775] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,823 INFO [RS:2;jenkins-hbase4:34775] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:21,823 INFO [RS:2;jenkins-hbase4:34775] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:21,823 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:21,823 INFO [RS:2;jenkins-hbase4:34775] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:21,823 INFO [RS:2;jenkins-hbase4:34775] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:21,824 INFO [RS:2;jenkins-hbase4:34775] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34775 2023-07-14 04:16:21,827 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:21,827 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,827 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34775,1689308178539 2023-07-14 04:16:21,828 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34775,1689308178539] 2023-07-14 04:16:21,828 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34775,1689308178539; numProcessing=2 2023-07-14 04:16:21,831 DEBUG [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/oldWALs 2023-07-14 04:16:21,831 INFO [RS:1;jenkins-hbase4:40445] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40445%2C1689308178379:(num 1689308179174) 2023-07-14 04:16:21,831 DEBUG [RS:1;jenkins-hbase4:40445] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 
04:16:21,831 INFO [RS:1;jenkins-hbase4:40445] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:21,831 INFO [RS:1;jenkins-hbase4:40445] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:21,831 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:21,832 INFO [RS:1;jenkins-hbase4:40445] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40445 2023-07-14 04:16:21,834 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34775,1689308178539 already deleted, retry=false 2023-07-14 04:16:21,834 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34775,1689308178539 expired; onlineServers=1 2023-07-14 04:16:21,836 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40445,1689308178379 2023-07-14 04:16:21,836 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:21,837 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40445,1689308178379] 2023-07-14 04:16:21,837 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40445,1689308178379; numProcessing=3 2023-07-14 04:16:21,838 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40445,1689308178379 already deleted, retry=false 2023-07-14 04:16:21,838 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40445,1689308178379 expired; onlineServers=0 2023-07-14 04:16:21,838 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36435,1689308178027' ***** 2023-07-14 04:16:21,838 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 04:16:21,839 DEBUG [M:0;jenkins-hbase4:36435] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e0e5088, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:21,839 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:21,840 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:21,840 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-14 04:16:21,841 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:21,841 INFO [M:0;jenkins-hbase4:36435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@33228b98{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:16:21,841 INFO [M:0;jenkins-hbase4:36435] server.AbstractConnector(383): Stopped ServerConnector@26c901e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:21,841 INFO [M:0;jenkins-hbase4:36435] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:21,841 INFO [M:0;jenkins-hbase4:36435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11bfa027{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:21,841 INFO [M:0;jenkins-hbase4:36435] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4b153e95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:21,842 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36435,1689308178027 2023-07-14 04:16:21,842 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36435,1689308178027; all regions closed. 2023-07-14 04:16:21,842 DEBUG [M:0;jenkins-hbase4:36435] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:21,842 INFO [M:0;jenkins-hbase4:36435] master.HMaster(1491): Stopping master jetty server 2023-07-14 04:16:21,843 INFO [M:0;jenkins-hbase4:36435] server.AbstractConnector(383): Stopped ServerConnector@5a675ffb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:21,843 DEBUG [M:0;jenkins-hbase4:36435] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 04:16:21,843 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-14 04:16:21,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308178927] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308178927,5,FailOnTimeoutGroup] 2023-07-14 04:16:21,843 DEBUG [M:0;jenkins-hbase4:36435] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 04:16:21,843 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308178923] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308178923,5,FailOnTimeoutGroup] 2023-07-14 04:16:21,844 INFO [M:0;jenkins-hbase4:36435] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 04:16:21,844 INFO [M:0;jenkins-hbase4:36435] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-14 04:16:21,844 INFO [M:0;jenkins-hbase4:36435] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:21,844 DEBUG [M:0;jenkins-hbase4:36435] master.HMaster(1512): Stopping service threads 2023-07-14 04:16:21,844 INFO [M:0;jenkins-hbase4:36435] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 04:16:21,845 ERROR [M:0;jenkins-hbase4:36435] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-14 04:16:21,845 INFO [M:0;jenkins-hbase4:36435] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 04:16:21,845 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-14 04:16:21,845 DEBUG [M:0;jenkins-hbase4:36435] zookeeper.ZKUtil(398): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-14 04:16:21,845 WARN [M:0;jenkins-hbase4:36435] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-14 04:16:21,845 INFO [M:0;jenkins-hbase4:36435] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 04:16:21,846 INFO [M:0;jenkins-hbase4:36435] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 04:16:21,846 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:16:21,846 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:21,846 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:21,846 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:16:21,846 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-14 04:16:21,846 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-14 04:16:21,859 INFO [M:0;jenkins-hbase4:36435] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/149b96827ca6469c8dac98d01fc6ac40 2023-07-14 04:16:21,865 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/149b96827ca6469c8dac98d01fc6ac40 as hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/149b96827ca6469c8dac98d01fc6ac40 2023-07-14 04:16:21,869 INFO [M:0;jenkins-hbase4:36435] regionserver.HStore(1080): Added hdfs://localhost:42129/user/jenkins/test-data/1f566899-ab9e-5058-ee2a-138351b2036b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/149b96827ca6469c8dac98d01fc6ac40, entries=24, sequenceid=194, filesize=12.4 K 2023-07-14 04:16:21,870 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=194, compaction requested=false 2023-07-14 04:16:21,872 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:21,872 DEBUG [M:0;jenkins-hbase4:36435] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:21,875 INFO [M:0;jenkins-hbase4:36435] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 04:16:21,875 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:21,876 INFO [M:0;jenkins-hbase4:36435] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36435 2023-07-14 04:16:21,878 DEBUG [M:0;jenkins-hbase4:36435] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36435,1689308178027 already deleted, retry=false 2023-07-14 04:16:22,285 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,285 INFO [M:0;jenkins-hbase4:36435] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36435,1689308178027; zookeeper connection closed. 2023-07-14 04:16:22,286 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): master:36435-0x101620ba9560000, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,386 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,386 INFO [RS:1;jenkins-hbase4:40445] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40445,1689308178379; zookeeper connection closed. 
2023-07-14 04:16:22,386 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:40445-0x101620ba9560002, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,386 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@70ff60fc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@70ff60fc 2023-07-14 04:16:22,486 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,486 INFO [RS:2;jenkins-hbase4:34775] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34775,1689308178539; zookeeper connection closed. 2023-07-14 04:16:22,486 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): regionserver:34775-0x101620ba9560003, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:22,486 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@16bb5b32] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@16bb5b32 2023-07-14 04:16:22,487 INFO [Listener at localhost/34751] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-14 04:16:22,487 WARN [Listener at localhost/34751] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:22,490 INFO [Listener at localhost/34751] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:22,595 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 04:16:22,595 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1211094465-172.31.14.131-1689308177110 (Datanode Uuid 552ffd18-8729-4e3f-89da-2e4a5bf1e3f2) service to localhost/127.0.0.1:42129 2023-07-14 04:16:22,596 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data5/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,596 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data6/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,598 WARN [Listener at localhost/34751] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:22,601 INFO [Listener at localhost/34751] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:22,705 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-14 04:16:22,705 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1211094465-172.31.14.131-1689308177110 (Datanode Uuid a6b3f32e-14e2-4bce-8741-408e3536106a) service to localhost/127.0.0.1:42129 2023-07-14 04:16:22,706 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data3/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,706 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data4/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,707 WARN [Listener at localhost/34751] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-14 04:16:22,710 INFO [Listener at localhost/34751] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:22,817 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-14 04:16:22,817 WARN [BP-1211094465-172.31.14.131-1689308177110 heartbeating to localhost/127.0.0.1:42129] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1211094465-172.31.14.131-1689308177110 (Datanode Uuid c2eda0a8-11fc-4254-b0ac-0616ebfec0f3) service to localhost/127.0.0.1:42129 2023-07-14 04:16:22,818 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data1/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,818 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/cluster_71d88701-ffe8-f871-f89f-1130090041ad/dfs/data/data2/current/BP-1211094465-172.31.14.131-1689308177110] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-14 04:16:22,828 INFO [Listener at localhost/34751] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-14 04:16:22,944 INFO [Listener at localhost/34751] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.log.dir so I do NOT create it in target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/82234153-dd45-507a-6ebf-b0cc47ee7d51/hadoop.tmp.dir so I do NOT create it in target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38, deleteOnExit=true 2023-07-14 04:16:22,970 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/test.cache.data in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.tmp.dir in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-14 04:16:22,971 DEBUG [Listener at localhost/34751] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:16:22,971 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/nfs.dump.dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-14 04:16:22,972 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-14 04:16:22,973 INFO [Listener at localhost/34751] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-14 04:16:22,976 WARN [Listener at localhost/34751] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:16:22,977 WARN [Listener at localhost/34751] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:16:23,014 WARN [Listener at localhost/34751] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:23,016 INFO [Listener at localhost/34751] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:23,021 INFO [Listener at localhost/34751] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/Jetty_localhost_45939_hdfs____67a1ns/webapp 2023-07-14 04:16:23,042 DEBUG [Listener at localhost/34751-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101620ba956000a, quorum=127.0.0.1:62077, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-14 04:16:23,042 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101620ba956000a, quorum=127.0.0.1:62077, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-14 04:16:23,114 INFO [Listener at localhost/34751] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45939 2023-07-14 04:16:23,118 WARN [Listener at localhost/34751] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-14 04:16:23,119 WARN [Listener at localhost/34751] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-14 04:16:23,158 WARN [Listener at localhost/33863] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:23,171 WARN [Listener at localhost/33863] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:16:23,173 WARN [Listener 
at localhost/33863] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:23,175 INFO [Listener at localhost/33863] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:23,179 INFO [Listener at localhost/33863] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/Jetty_localhost_45007_datanode____7zjwfx/webapp 2023-07-14 04:16:23,271 INFO [Listener at localhost/33863] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45007 2023-07-14 04:16:23,278 WARN [Listener at localhost/40287] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:23,289 WARN [Listener at localhost/40287] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:16:23,291 WARN [Listener at localhost/40287] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:23,292 INFO [Listener at localhost/40287] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:23,295 INFO [Listener at localhost/40287] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/Jetty_localhost_44631_datanode____rzqyd7/webapp 2023-07-14 04:16:23,375 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5466d66d2f276e5: Processing first storage report for DS-ecc455eb-0975-4a9c-937f-98cf061fa274 from datanode fc6f153f-2743-48e5-83f4-385a54b0a384 2023-07-14 04:16:23,375 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5466d66d2f276e5: from storage DS-ecc455eb-0975-4a9c-937f-98cf061fa274 node DatanodeRegistration(127.0.0.1:46789, datanodeUuid=fc6f153f-2743-48e5-83f4-385a54b0a384, infoPort=44051, infoSecurePort=0, ipcPort=40287, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,375 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5466d66d2f276e5: Processing first storage report for DS-8e4d46de-5f14-41f1-89e6-0135561130b3 from datanode fc6f153f-2743-48e5-83f4-385a54b0a384 2023-07-14 04:16:23,375 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5466d66d2f276e5: from storage DS-8e4d46de-5f14-41f1-89e6-0135561130b3 node DatanodeRegistration(127.0.0.1:46789, datanodeUuid=fc6f153f-2743-48e5-83f4-385a54b0a384, infoPort=44051, infoSecurePort=0, ipcPort=40287, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,400 INFO [Listener at localhost/40287] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44631 2023-07-14 04:16:23,409 WARN [Listener at localhost/35923] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-14 04:16:23,427 WARN [Listener at localhost/35923] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-14 04:16:23,429 WARN [Listener at localhost/35923] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-14 04:16:23,430 INFO [Listener at localhost/35923] log.Slf4jLog(67): jetty-6.1.26 2023-07-14 04:16:23,435 INFO [Listener at localhost/35923] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/Jetty_localhost_40633_datanode____.iywb9b/webapp 2023-07-14 04:16:23,530 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b477dcfb638aae4: Processing first storage report for DS-e787a120-6f7b-4dce-bbd9-43f1468e3969 from datanode b26285dd-7283-4221-8be6-99e8795826b9 2023-07-14 04:16:23,531 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b477dcfb638aae4: from storage DS-e787a120-6f7b-4dce-bbd9-43f1468e3969 node DatanodeRegistration(127.0.0.1:45853, datanodeUuid=b26285dd-7283-4221-8be6-99e8795826b9, infoPort=43035, infoSecurePort=0, ipcPort=35923, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,531 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5b477dcfb638aae4: Processing first storage report for DS-c85fd2b7-aedc-4ae5-a048-4eefd75a64ba from datanode b26285dd-7283-4221-8be6-99e8795826b9 2023-07-14 04:16:23,531 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5b477dcfb638aae4: from storage DS-c85fd2b7-aedc-4ae5-a048-4eefd75a64ba node DatanodeRegistration(127.0.0.1:45853, datanodeUuid=b26285dd-7283-4221-8be6-99e8795826b9, infoPort=43035, infoSecurePort=0, ipcPort=35923, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,546 INFO [Listener at localhost/35923] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40633 2023-07-14 04:16:23,554 WARN [Listener at localhost/38975] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-14 04:16:23,664 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4fda06156433131: Processing first storage report for DS-16893484-af6d-4fa6-820c-6ad960fc5775 from datanode aeb955c7-9edd-4b44-bf51-695af2ee1119 2023-07-14 04:16:23,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4fda06156433131: from storage DS-16893484-af6d-4fa6-820c-6ad960fc5775 node DatanodeRegistration(127.0.0.1:33459, datanodeUuid=aeb955c7-9edd-4b44-bf51-695af2ee1119, infoPort=35087, infoSecurePort=0, ipcPort=38975, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,665 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4fda06156433131: Processing first storage 
report for DS-3418cea9-6ebf-4a40-9597-91936b94859a from datanode aeb955c7-9edd-4b44-bf51-695af2ee1119 2023-07-14 04:16:23,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4fda06156433131: from storage DS-3418cea9-6ebf-4a40-9597-91936b94859a node DatanodeRegistration(127.0.0.1:33459, datanodeUuid=aeb955c7-9edd-4b44-bf51-695af2ee1119, infoPort=35087, infoSecurePort=0, ipcPort=38975, storageInfo=lv=-57;cid=testClusterID;nsid=1183541460;c=1689308182979), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-14 04:16:23,667 DEBUG [Listener at localhost/38975] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2 2023-07-14 04:16:23,672 INFO [Listener at localhost/38975] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/zookeeper_0, clientPort=62981, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-14 04:16:23,673 INFO [Listener at localhost/38975] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62981 2023-07-14 04:16:23,673 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,674 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,693 INFO [Listener at localhost/38975] util.FSUtils(471): Created version file at hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b with version=8 2023-07-14 04:16:23,694 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:33983/user/jenkins/test-data/73340e07-5c9f-8f5c-47bf-558c02bbecb4/hbase-staging 2023-07-14 04:16:23,694 DEBUG [Listener at localhost/38975] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-14 04:16:23,695 DEBUG [Listener at localhost/38975] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-14 04:16:23,695 DEBUG [Listener at localhost/38975] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-14 04:16:23,695 DEBUG [Listener at localhost/38975] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-14 04:16:23,695 INFO [Listener at localhost/38975] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:23,696 INFO [Listener at localhost/38975] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:23,698 INFO [Listener at localhost/38975] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44421 2023-07-14 04:16:23,698 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,699 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,700 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44421 connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:23,707 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:444210x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:23,708 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44421-0x101620bbf7a0000 connected 2023-07-14 04:16:23,721 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:23,722 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:23,722 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:23,723 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44421 2023-07-14 04:16:23,723 DEBUG [Listener at localhost/38975] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44421 2023-07-14 04:16:23,723 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44421 2023-07-14 04:16:23,723 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44421 2023-07-14 04:16:23,723 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44421 2023-07-14 04:16:23,725 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:23,725 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:23,725 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:23,726 INFO [Listener at localhost/38975] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-14 04:16:23,726 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:23,726 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:23,726 INFO [Listener at localhost/38975] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
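The queue and handler counts logged for default.FPBQ.Fifo, priority.RWQ.Fifo and the other executors are governed by RPC configuration keys such as hbase.regionserver.handler.count. The snippet below only illustrates setting that key on an HBase Configuration before servers start; the value 3 is an assumption chosen to mirror the handlerCount values seen in this run, not the shipped default of 30.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfigSketch {
  public static void main(String[] args) {
    // Load hbase-default.xml / hbase-site.xml defaults.
    Configuration conf = HBaseConfiguration.create();

    // Shrinking the RPC handler pool is one way to end up with small
    // handlerCount values like the ones logged for the executors above.
    conf.setInt("hbase.regionserver.handler.count", 3);

    System.out.println("handler count = "
        + conf.getInt("hbase.regionserver.handler.count", 30));
  }
}
```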
2023-07-14 04:16:23,727 INFO [Listener at localhost/38975] http.HttpServer(1146): Jetty bound to port 41355 2023-07-14 04:16:23,727 INFO [Listener at localhost/38975] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:23,728 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:23,728 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64e35faa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:23,728 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:23,728 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@eb3294d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:23,845 INFO [Listener at localhost/38975] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:23,846 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:23,846 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:23,846 INFO [Listener at localhost/38975] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:16:23,848 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:23,849 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7b38f305{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/jetty-0_0_0_0-41355-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7276022300565654552/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:16:23,850 INFO [Listener at localhost/38975] server.AbstractConnector(333): Started ServerConnector@65704643{HTTP/1.1, (http/1.1)}{0.0.0.0:41355} 2023-07-14 04:16:23,851 INFO [Listener at localhost/38975] server.Server(415): Started @42728ms 2023-07-14 04:16:23,851 INFO [Listener at localhost/38975] master.HMaster(444): hbase.rootdir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b, hbase.cluster.distributed=false 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,865 INFO 
[Listener at localhost/38975] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:23,865 INFO [Listener at localhost/38975] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:23,866 INFO [Listener at localhost/38975] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39705 2023-07-14 04:16:23,867 INFO [Listener at localhost/38975] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:23,868 DEBUG [Listener at localhost/38975] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:23,868 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,870 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:23,870 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39705 connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:23,874 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:397050x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:23,876 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39705-0x101620bbf7a0001 connected 2023-07-14 04:16:23,876 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:23,876 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:23,877 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:23,882 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39705 2023-07-14 04:16:23,882 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39705 2023-07-14 04:16:23,882 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39705 2023-07-14 04:16:23,882 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39705 2023-07-14 04:16:23,883 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39705 2023-07-14 04:16:23,884 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:23,885 INFO [Listener at localhost/38975] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:16:23,886 INFO [Listener at localhost/38975] http.HttpServer(1146): Jetty bound to port 43131 2023-07-14 04:16:23,886 INFO [Listener at localhost/38975] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:23,887 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:23,887 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@782404d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:23,887 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:23,888 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@a0a1c5c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:24,005 INFO [Listener at localhost/38975] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:24,006 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:24,006 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:24,006 INFO [Listener at localhost/38975] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:16:24,007 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,008 INFO 
[Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@54c5488a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/jetty-0_0_0_0-43131-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5979366811705762762/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:24,009 INFO [Listener at localhost/38975] server.AbstractConnector(333): Started ServerConnector@7dbd2218{HTTP/1.1, (http/1.1)}{0.0.0.0:43131} 2023-07-14 04:16:24,009 INFO [Listener at localhost/38975] server.Server(415): Started @42887ms 2023-07-14 04:16:24,021 INFO [Listener at localhost/38975] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:24,021 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,021 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,022 INFO [Listener at localhost/38975] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:24,022 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,022 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:24,022 INFO [Listener at localhost/38975] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:24,022 INFO [Listener at localhost/38975] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35253 2023-07-14 04:16:24,023 INFO [Listener at localhost/38975] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:24,024 DEBUG [Listener at localhost/38975] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:24,025 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:24,026 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:24,026 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35253 connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:24,030 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:352530x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 
04:16:24,031 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:352530x0, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:24,031 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35253-0x101620bbf7a0002 connected 2023-07-14 04:16:24,032 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:24,032 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:24,033 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35253 2023-07-14 04:16:24,033 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35253 2023-07-14 04:16:24,033 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35253 2023-07-14 04:16:24,033 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35253 2023-07-14 04:16:24,034 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35253 2023-07-14 04:16:24,035 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:24,035 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:24,035 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:24,036 INFO [Listener at localhost/38975] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:24,036 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:24,036 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:24,036 INFO [Listener at localhost/38975] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
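Each "Set watcher on znode that does not yet exist" entry corresponds to the usual ZooKeeper pattern of calling exists() with a watcher, so that later creation of /hbase/master, /hbase/running or /hbase/acl fires the NodeCreated events that appear further down in this log. A stripped-down sketch of that pattern against the plain ZooKeeper client follows; the connect string reuses the ensemble port from this log purely for illustration.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch created = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62981", 30_000, event -> { });

    // exists() registers the watcher even when the znode is absent, which is
    // exactly what "Set watcher on znode that does not yet exist" refers to.
    zk.exists("/hbase/master", (WatchedEvent event) -> {
      if (event.getType() == Watcher.Event.EventType.NodeCreated) {
        created.countDown(); // fires once an active master registers itself
      }
    });

    created.await();
    zk.close();
  }
}
```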
2023-07-14 04:16:24,037 INFO [Listener at localhost/38975] http.HttpServer(1146): Jetty bound to port 42433 2023-07-14 04:16:24,037 INFO [Listener at localhost/38975] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:24,038 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,038 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1989f106{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:24,038 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,039 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@642793fd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:24,152 INFO [Listener at localhost/38975] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:24,153 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:24,153 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:24,153 INFO [Listener at localhost/38975] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:16:24,154 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,155 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3a3201c6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/jetty-0_0_0_0-42433-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7437026165923969305/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:24,157 INFO [Listener at localhost/38975] server.AbstractConnector(333): Started ServerConnector@4cdfbf9d{HTTP/1.1, (http/1.1)}{0.0.0.0:42433} 2023-07-14 04:16:24,157 INFO [Listener at localhost/38975] server.Server(415): Started @43034ms 2023-07-14 04:16:24,169 INFO [Listener at localhost/38975] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:24,169 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,169 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,169 INFO [Listener at localhost/38975] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:24,169 INFO 
[Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:24,169 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:24,170 INFO [Listener at localhost/38975] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:24,170 INFO [Listener at localhost/38975] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35659 2023-07-14 04:16:24,171 INFO [Listener at localhost/38975] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:24,172 DEBUG [Listener at localhost/38975] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:24,173 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:24,173 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:24,174 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35659 connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:24,179 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:356590x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:24,180 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:356590x0, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:24,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35659-0x101620bbf7a0003 connected 2023-07-14 04:16:24,181 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:24,181 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:24,182 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35659 2023-07-14 04:16:24,182 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35659 2023-07-14 04:16:24,183 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35659 2023-07-14 04:16:24,183 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35659 2023-07-14 04:16:24,183 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=35659 2023-07-14 04:16:24,185 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:24,185 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:24,185 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] http.HttpServer(1146): Jetty bound to port 44343 2023-07-14 04:16:24,186 INFO [Listener at localhost/38975] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:24,191 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,191 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c6daf69{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:24,191 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,191 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@179e7414{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:24,314 INFO [Listener at localhost/38975] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:24,315 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:24,315 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:24,315 INFO [Listener at localhost/38975] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-14 04:16:24,316 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:24,316 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2ad0ad4c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/jetty-0_0_0_0-44343-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3710621051939467715/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:24,318 INFO [Listener at localhost/38975] server.AbstractConnector(333): Started ServerConnector@352219ef{HTTP/1.1, (http/1.1)}{0.0.0.0:44343} 2023-07-14 04:16:24,318 INFO [Listener at localhost/38975] server.Server(415): Started @43195ms 2023-07-14 04:16:24,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:24,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2dc7c695{HTTP/1.1, (http/1.1)}{0.0.0.0:44095} 2023-07-14 04:16:24,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43201ms 2023-07-14 04:16:24,324 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,325 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:16:24,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,327 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:24,327 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:24,327 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:24,327 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:24,327 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:16:24,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:16:24,331 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44421,1689308183695 from backup master directory 2023-07-14 04:16:24,332 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,332 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-14 04:16:24,332 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:16:24,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,350 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/hbase.id with ID: 1c816a60-4be4-4034-8899-7464f4b8a5ee 2023-07-14 04:16:24,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:24,364 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x066dfb79 to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:24,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@431b4345, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:24,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:24,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-14 04:16:24,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:24,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store-tmp 2023-07-14 04:16:24,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:24,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:16:24,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:24,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:24,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:16:24,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:24,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
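The master:store region above is created from a table descriptor whose single 'proc' family carries the attributes in the log (ROW bloom filter, one version, 64 KB blocks, not in-memory). For reference, the same attributes expressed through the public 2.x descriptor builders look roughly like the sketch below; this is an illustrative reconstruction, not the internal code path HBase uses to build master:store.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Mirrors the attributes logged for the 'proc' family of master:store.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setBlocksize(65536)               // BLOCKSIZE => '65536'
        .setInMemory(false)                // IN_MEMORY => 'false'
        .build();

    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    System.out.println(store);
  }
}
```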
2023-07-14 04:16:24,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:24,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/WALs/jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,416 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44421%2C1689308183695, suffix=, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/WALs/jenkins-hbase4.apache.org,44421,1689308183695, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/oldWALs, maxLogs=10 2023-07-14 04:16:24,430 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:24,431 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:24,432 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:24,435 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/WALs/jenkins-hbase4.apache.org,44421,1689308183695/jenkins-hbase4.apache.org%2C44421%2C1689308183695.1689308184416 2023-07-14 04:16:24,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK], DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK], DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK]] 2023-07-14 04:16:24,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:24,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:24,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,435 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,439 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,441 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-14 04:16:24,441 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-14 04:16:24,442 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:24,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,443 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-14 04:16:24,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:24,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11122840320, jitterRate=0.03589522838592529}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:24,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:24,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-14 04:16:24,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-14 04:16:24,451 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-14 04:16:24,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-14 04:16:24,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-14 04:16:24,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-14 04:16:24,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-14 04:16:24,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-14 04:16:24,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-14 04:16:24,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-14 04:16:24,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-14 04:16:24,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-14 04:16:24,457 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-14 04:16:24,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-14 04:16:24,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-14 04:16:24,461 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:24,461 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:24,461 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-14 04:16:24,461 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,461 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:24,464 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44421,1689308183695, sessionid=0x101620bbf7a0000, setting cluster-up flag (Was=false) 2023-07-14 04:16:24,467 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-14 04:16:24,474 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,478 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-14 04:16:24,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:24,484 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.hbase-snapshot/.tmp 2023-07-14 04:16:24,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-14 04:16:24,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-14 04:16:24,492 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-14 04:16:24,496 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:16:24,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
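The coprocessor entries show RSGroupAdminEndpoint (plus the test's CPMasterObserver) being registered as system coprocessors on the master. System master coprocessors are declared through configuration before startup; a hypothetical snippet using the standard hbase.coprocessor.master.classes key is below, listing only the rsgroup endpoint.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CoprocessorConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Classes listed under this key are loaded as system coprocessors when
    // the master starts, producing "System coprocessor ... loaded" entries
    // like the ones above.
    conf.setStrings("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");

    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}
```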
2023-07-14 04:16:24,498 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:24,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:16:24,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 04:16:24,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-14 04:16:24,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:24,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689308214514 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-14 04:16:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-14 04:16:24,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,515 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:24,515 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-14 04:16:24,516 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-14 04:16:24,517 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:24,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-14 04:16:24,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-14 04:16:24,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-14 04:16:24,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-14 04:16:24,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308184527,5,FailOnTimeoutGroup] 2023-07-14 04:16:24,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308184527,5,FailOnTimeoutGroup] 2023-07-14 04:16:24,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-14 04:16:24,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,528 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(951): ClusterId : 1c816a60-4be4-4034-8899-7464f4b8a5ee 2023-07-14 04:16:24,528 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(951): ClusterId : 1c816a60-4be4-4034-8899-7464f4b8a5ee 2023-07-14 04:16:24,528 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:24,528 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:24,530 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(951): ClusterId : 1c816a60-4be4-4034-8899-7464f4b8a5ee 2023-07-14 04:16:24,530 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:24,530 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:24,530 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:24,530 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:24,531 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:24,534 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:24,535 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ReadOnlyZKClient(139): Connect 0x67397cdc to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:24,536 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:24,536 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:24,536 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(43): Procedure 
online-snapshot initializing 2023-07-14 04:16:24,538 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:24,545 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ReadOnlyZKClient(139): Connect 0x67fab74b to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:24,545 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ReadOnlyZKClient(139): Connect 0x75e94ef0 to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:24,561 DEBUG [RS:2;jenkins-hbase4:35659] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69fe9e4b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:24,561 DEBUG [RS:2;jenkins-hbase4:35659] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dc117ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:24,566 DEBUG [RS:1;jenkins-hbase4:35253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52b259d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:24,567 DEBUG [RS:1;jenkins-hbase4:35253] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44d6bd50, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:24,567 DEBUG [RS:0;jenkins-hbase4:39705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4de0ddf2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:24,567 DEBUG [RS:0;jenkins-hbase4:39705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5657d503, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:24,573 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35659 2023-07-14 04:16:24,573 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:24,573 INFO [RS:2;jenkins-hbase4:35659] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:24,573 INFO [RS:2;jenkins-hbase4:35659] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:24,573 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-14 04:16:24,574 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:24,574 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b 2023-07-14 04:16:24,574 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44421,1689308183695 with isa=jenkins-hbase4.apache.org/172.31.14.131:35659, startcode=1689308184169 2023-07-14 04:16:24,574 DEBUG [RS:2;jenkins-hbase4:35659] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:24,577 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35253 2023-07-14 04:16:24,577 INFO [RS:1;jenkins-hbase4:35253] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:24,577 INFO [RS:1;jenkins-hbase4:35253] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:24,577 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:16:24,578 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44421,1689308183695 with isa=jenkins-hbase4.apache.org/172.31.14.131:35253, startcode=1689308184021 2023-07-14 04:16:24,578 DEBUG [RS:1;jenkins-hbase4:35253] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:24,578 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39705 2023-07-14 04:16:24,578 INFO [RS:0;jenkins-hbase4:39705] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:24,578 INFO [RS:0;jenkins-hbase4:39705] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:24,578 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-14 04:16:24,579 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44421,1689308183695 with isa=jenkins-hbase4.apache.org/172.31.14.131:39705, startcode=1689308183864 2023-07-14 04:16:24,579 DEBUG [RS:0;jenkins-hbase4:39705] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:24,579 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50053, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:24,585 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,585 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:16:24,586 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-14 04:16:24,586 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54905, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:24,587 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-14 04:16:24,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-14 04:16:24,587 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b 2023-07-14 04:16:24,587 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33863 2023-07-14 04:16:24,587 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b 2023-07-14 04:16:24,587 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41355 2023-07-14 04:16:24,587 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47857, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:24,587 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33863 2023-07-14 04:16:24,588 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41355 2023-07-14 04:16:24,588 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,588 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-14 04:16:24,588 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-14 04:16:24,588 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b 2023-07-14 04:16:24,588 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33863 2023-07-14 04:16:24,588 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41355 2023-07-14 04:16:24,589 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:24,594 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,594 WARN [RS:1;jenkins-hbase4:35253] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 04:16:24,594 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,594 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35253,1689308184021] 2023-07-14 04:16:24,594 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35659,1689308184169] 2023-07-14 04:16:24,594 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39705,1689308183864] 2023-07-14 04:16:24,594 WARN [RS:2;jenkins-hbase4:35659] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-14 04:16:24,594 INFO [RS:1;jenkins-hbase4:35253] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:24,594 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,595 INFO [RS:2;jenkins-hbase4:35659] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:24,595 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,595 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,595 WARN [RS:0;jenkins-hbase4:39705] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 04:16:24,595 INFO [RS:0;jenkins-hbase4:39705] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:24,595 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,612 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:24,616 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,616 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,616 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,617 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,617 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,617 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,617 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,617 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,617 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,618 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:24,618 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:24,618 INFO [RS:1;jenkins-hbase4:35253] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:24,618 INFO [RS:2;jenkins-hbase4:35659] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:24,619 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.Replication(139): Replication stats-in-log 
period=300 seconds 2023-07-14 04:16:24,620 INFO [RS:0;jenkins-hbase4:39705] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:24,623 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:16:24,627 INFO [RS:0;jenkins-hbase4:39705] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:24,631 INFO [RS:1;jenkins-hbase4:35253] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:24,631 INFO [RS:2;jenkins-hbase4:35659] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:24,632 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/info 2023-07-14 04:16:24,632 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:16:24,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:24,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:16:24,635 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:24,635 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 
2023-07-14 04:16:24,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:24,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:16:24,638 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/table 2023-07-14 04:16:24,638 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:16:24,639 INFO [RS:0;jenkins-hbase4:39705] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:24,639 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,639 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:24,642 INFO [RS:1;jenkins-hbase4:35253] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:24,642 INFO [RS:2;jenkins-hbase4:35659] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:24,642 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:24,642 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,642 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:24,642 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:24,644 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:24,642 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740 2023-07-14 04:16:24,645 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,646 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,646 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740 2023-07-14 04:16:24,646 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,647 DEBUG [RS:0;jenkins-hbase4:39705] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,653 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 04:16:24,655 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:16:24,658 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:24,658 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,660 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,660 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,658 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,660 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,660 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,660 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,660 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:1;jenkins-hbase4:35253] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,660 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:2;jenkins-hbase4:35659] 
executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,661 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:24,662 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,662 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,662 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,662 DEBUG [RS:2;jenkins-hbase4:35659] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:24,670 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:24,671 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10248914080, jitterRate=-0.045495495200157166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:16:24,671 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:16:24,672 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:16:24,672 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:16:24,672 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:16:24,672 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:16:24,672 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:16:24,682 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,683 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,687 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,687 INFO [RS:0;jenkins-hbase4:39705] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:24,687 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39705,1689308183864-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,688 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:24,688 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,688 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:24,691 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:24,691 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:16:24,692 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-14 04:16:24,692 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-14 04:16:24,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-14 04:16:24,695 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-14 04:16:24,698 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-14 04:16:24,701 INFO [RS:1;jenkins-hbase4:35253] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:24,701 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35253,1689308184021-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:24,706 INFO [RS:0;jenkins-hbase4:39705] regionserver.Replication(203): jenkins-hbase4.apache.org,39705,1689308183864 started 2023-07-14 04:16:24,707 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39705,1689308183864, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39705, sessionid=0x101620bbf7a0001 2023-07-14 04:16:24,707 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:24,707 DEBUG [RS:0;jenkins-hbase4:39705] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,707 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39705,1689308183864' 2023-07-14 04:16:24,707 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:24,707 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39705,1689308183864' 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:24,708 DEBUG [RS:0;jenkins-hbase4:39705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:24,709 DEBUG [RS:0;jenkins-hbase4:39705] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:24,709 INFO [RS:2;jenkins-hbase4:35659] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:24,709 INFO [RS:0;jenkins-hbase4:39705] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:16:24,709 INFO [RS:0;jenkins-hbase4:39705] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-14 04:16:24,709 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35659,1689308184169-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:24,729 INFO [RS:1;jenkins-hbase4:35253] regionserver.Replication(203): jenkins-hbase4.apache.org,35253,1689308184021 started 2023-07-14 04:16:24,730 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35253,1689308184021, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35253, sessionid=0x101620bbf7a0002 2023-07-14 04:16:24,730 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:24,730 DEBUG [RS:1;jenkins-hbase4:35253] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,730 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35253,1689308184021' 2023-07-14 04:16:24,730 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:24,730 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35253,1689308184021' 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:24,731 DEBUG [RS:1;jenkins-hbase4:35253] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:24,732 DEBUG [RS:1;jenkins-hbase4:35253] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:24,732 INFO [RS:1;jenkins-hbase4:35253] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:16:24,732 INFO [RS:1;jenkins-hbase4:35253] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 04:16:24,735 INFO [RS:2;jenkins-hbase4:35659] regionserver.Replication(203): jenkins-hbase4.apache.org,35659,1689308184169 started 2023-07-14 04:16:24,735 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35659,1689308184169, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35659, sessionid=0x101620bbf7a0003 2023-07-14 04:16:24,735 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:24,735 DEBUG [RS:2;jenkins-hbase4:35659] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,735 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35659,1689308184169' 2023-07-14 04:16:24,735 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:24,735 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35659,1689308184169' 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:24,736 DEBUG [RS:2;jenkins-hbase4:35659] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:24,737 DEBUG [RS:2;jenkins-hbase4:35659] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:24,737 INFO [RS:2;jenkins-hbase4:35659] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:16:24,737 INFO [RS:2;jenkins-hbase4:35659] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 04:16:24,811 INFO [RS:0;jenkins-hbase4:39705] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39705%2C1689308183864, suffix=, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,39705,1689308183864, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs, maxLogs=32 2023-07-14 04:16:24,836 INFO [RS:1;jenkins-hbase4:35253] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35253%2C1689308184021, suffix=, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35253,1689308184021, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs, maxLogs=32 2023-07-14 04:16:24,842 INFO [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35659%2C1689308184169, suffix=, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35659,1689308184169, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs, maxLogs=32 2023-07-14 04:16:24,846 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:24,847 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:24,847 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:24,848 DEBUG [jenkins-hbase4:44421] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-14 04:16:24,849 DEBUG [jenkins-hbase4:44421] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:24,849 DEBUG [jenkins-hbase4:44421] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:24,849 DEBUG [jenkins-hbase4:44421] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:24,849 DEBUG [jenkins-hbase4:44421] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:24,849 DEBUG [jenkins-hbase4:44421] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:24,860 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35659,1689308184169, state=OPENING 2023-07-14 04:16:24,861 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-14 04:16:24,863 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:24,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35659,1689308184169}] 2023-07-14 04:16:24,864 INFO [RS:0;jenkins-hbase4:39705] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,39705,1689308183864/jenkins-hbase4.apache.org%2C39705%2C1689308183864.1689308184811 2023-07-14 04:16:24,864 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:24,868 DEBUG [RS:0;jenkins-hbase4:39705] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK], DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK], DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK]] 2023-07-14 04:16:24,872 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:24,879 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:24,879 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:24,889 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-14 04:16:24,897 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:24,897 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:24,897 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:24,898 INFO [RS:1;jenkins-hbase4:35253] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35253,1689308184021/jenkins-hbase4.apache.org%2C35253%2C1689308184021.1689308184836 2023-07-14 04:16:24,902 DEBUG [RS:1;jenkins-hbase4:35253] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK], DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK], DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK]] 2023-07-14 04:16:24,905 INFO [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35659,1689308184169/jenkins-hbase4.apache.org%2C35659%2C1689308184169.1689308184842 2023-07-14 04:16:24,905 DEBUG [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK], DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK], DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK]] 2023-07-14 04:16:25,034 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:25,034 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:16:25,036 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60678, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:25,044 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-14 04:16:25,044 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:25,046 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35659%2C1689308184169.meta, suffix=.meta, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35659,1689308184169, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs, maxLogs=32 2023-07-14 04:16:25,066 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:25,066 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:25,071 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:25,075 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,35659,1689308184169/jenkins-hbase4.apache.org%2C35659%2C1689308184169.meta.1689308185047.meta 2023-07-14 04:16:25,075 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer 
with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK], DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK], DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK]] 2023-07-14 04:16:25,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:25,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:16:25,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-14 04:16:25,076 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-14 04:16:25,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-14 04:16:25,076 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,077 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-14 04:16:25,077 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-14 04:16:25,081 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-14 04:16:25,082 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/info 2023-07-14 04:16:25,083 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/info 2023-07-14 04:16:25,083 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-14 04:16:25,084 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:25,084 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-14 04:16:25,085 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:25,085 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/rep_barrier 2023-07-14 04:16:25,085 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-14 04:16:25,085 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:25,086 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-14 04:16:25,087 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/table 2023-07-14 04:16:25,087 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/table 2023-07-14 04:16:25,087 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-14 04:16:25,087 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:25,088 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740 2023-07-14 04:16:25,089 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740 2023-07-14 04:16:25,091 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-14 04:16:25,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-14 04:16:25,092 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9819052640, jitterRate=-0.08552946150302887}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-14 04:16:25,092 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-14 04:16:25,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689308185034 2023-07-14 04:16:25,098 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-14 04:16:25,099 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-14 04:16:25,099 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35659,1689308184169, state=OPEN 2023-07-14 04:16:25,101 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-14 04:16:25,101 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-14 04:16:25,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-14 04:16:25,102 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35659,1689308184169 in 237 msec 2023-07-14 04:16:25,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-14 04:16:25,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 410 msec 2023-07-14 04:16:25,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 608 msec 2023-07-14 
04:16:25,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689308185107, completionTime=-1 2023-07-14 04:16:25,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-14 04:16:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-14 04:16:25,111 DEBUG [hconnection-0x7bfa5ae7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,112 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60694, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-14 04:16:25,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689308245114 2023-07-14 04:16:25,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689308305114 2023-07-14 04:16:25,114 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-14 04:16:25,114 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:25,115 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 04:16:25,116 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1689308183695-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1689308183695-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
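The WAL configuration entries at the top of this section (blocksize=256 MB, rollsize=128 MB) are consistent with the usual relationship rollsize = blocksize x roll multiplier. A minimal sketch of that calculation follows; it is not part of the captured log or the test source, and the configuration keys and the 0.5 default are assumptions based on stock HBase 2.x settings.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalRollSizeSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // 256 MB block size, as printed in the WAL configuration lines above
        long blockSize = conf.getLong("hbase.regionserver.hlog.blocksize",
            256L * 1024 * 1024);
        // Assumed stock default multiplier of 0.5
        float multiplier = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        long rollSize = (long) (blockSize * multiplier); // 128 MB, matching the log
        System.out.println("blocksize=" + blockSize + " rollsize=" + rollSize);
      }
    }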
2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1689308183695-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44421, period=300000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-14 04:16:25,120 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:25,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:25,121 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:25,121 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-14 04:16:25,121 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-14 04:16:25,123 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:25,123 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,123 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:25,124 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971 empty. 
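The master log above prints the descriptors it uses when creating hbase:rsgroup and hbase:namespace. As a rough illustration only (not taken from the test source), the sketch below builds a table descriptor of the same shape as hbase:rsgroup through the public 2.x client API; the table name is hypothetical, and the real system table is created by the master itself rather than by client code.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupLikeTableSketch {
      public static void main(String[] args) throws Exception {
        // Mirrors the shape of the hbase:rsgroup descriptor logged above: one 'm'
        // family with a single version, the MultiRowMutationEndpoint coprocessor,
        // and splitting disabled via SPLIT_POLICY. Hypothetical table name.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("rsgroup_like_demo"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .setBlocksize(64 * 1024)
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(td); // drives a CreateTableProcedure like pid=4/pid=5 above
        }
      }
    }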
2023-07-14 04:16:25,124 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,124 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-14 04:16:25,124 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,125 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606 empty. 2023-07-14 04:16:25,125 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,125 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-14 04:16:25,140 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:25,142 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:25,143 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4df932c55bcbb5ec85af558c057d2606, NAME => 'hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp 2023-07-14 04:16:25,143 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 768a0e28f09d1bbbf09bf2c25810f971, NAME => 'hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] 
regionserver.HRegion(1604): Closing 4df932c55bcbb5ec85af558c057d2606, disabling compactions & flushes 2023-07-14 04:16:25,159 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. after waiting 0 ms 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,159 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,159 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4df932c55bcbb5ec85af558c057d2606: 2023-07-14 04:16:25,162 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:25,162 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,162 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 768a0e28f09d1bbbf09bf2c25810f971, disabling compactions & flushes 2023-07-14 04:16:25,162 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,162 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,162 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. after waiting 0 ms 2023-07-14 04:16:25,162 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,163 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,163 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 768a0e28f09d1bbbf09bf2c25810f971: 2023-07-14 04:16:25,163 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308185163"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308185163"}]},"ts":"1689308185163"} 2023-07-14 04:16:25,166 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
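The MetaTableAccessor Put entries above record each new region's regioninfo and state columns in hbase:meta. The sketch below shows one way those rows could be read back with an ordinary client scan; it is an illustration under the assumption that a plain scan of the info family is enough for inspection, not the internal MetaTableAccessor path the log refers to.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateScanSketch {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan().addFamily(info))) {
          for (Result r : scanner) {
            // Rows such as "hbase:namespace,,1689308185120.4df932c5..." carry the
            // regioninfo/state columns written by the Puts logged above.
            System.out.println(Bytes.toStringBinary(r.getRow()) + " state="
                + Bytes.toStringBinary(r.getValue(info, Bytes.toBytes("state"))));
          }
        }
      }
    }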
2023-07-14 04:16:25,166 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:25,167 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:25,167 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308185167"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308185167"}]},"ts":"1689308185167"} 2023-07-14 04:16:25,167 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308185167"}]},"ts":"1689308185167"} 2023-07-14 04:16:25,168 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:25,169 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-14 04:16:25,169 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:25,169 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308185169"}]},"ts":"1689308185169"} 2023-07-14 04:16:25,170 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-14 04:16:25,173 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:25,173 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:25,173 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:25,173 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:25,173 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:25,173 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4df932c55bcbb5ec85af558c057d2606, ASSIGN}] 2023-07-14 04:16:25,175 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4df932c55bcbb5ec85af558c057d2606, ASSIGN 2023-07-14 04:16:25,176 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:25,176 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:25,176 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:25,176 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:25,176 DEBUG [PEWorker-4] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:25,176 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=4df932c55bcbb5ec85af558c057d2606, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35253,1689308184021; forceNewPlan=false, retain=false 2023-07-14 04:16:25,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=768a0e28f09d1bbbf09bf2c25810f971, ASSIGN}] 2023-07-14 04:16:25,177 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=768a0e28f09d1bbbf09bf2c25810f971, ASSIGN 2023-07-14 04:16:25,177 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=768a0e28f09d1bbbf09bf2c25810f971, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39705,1689308183864; forceNewPlan=false, retain=false 2023-07-14 04:16:25,178 INFO [jenkins-hbase4:44421] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-14 04:16:25,179 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4df932c55bcbb5ec85af558c057d2606, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,180 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308185179"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308185179"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308185179"}]},"ts":"1689308185179"} 2023-07-14 04:16:25,180 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=768a0e28f09d1bbbf09bf2c25810f971, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308185180"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308185180"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308185180"}]},"ts":"1689308185180"} 2023-07-14 04:16:25,181 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 4df932c55bcbb5ec85af558c057d2606, server=jenkins-hbase4.apache.org,35253,1689308184021}] 2023-07-14 04:16:25,181 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 768a0e28f09d1bbbf09bf2c25810f971, server=jenkins-hbase4.apache.org,39705,1689308183864}] 2023-07-14 04:16:25,334 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,334 DEBUG 
[RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,334 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:16:25,335 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-14 04:16:25,336 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56712, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:25,336 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46508, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-14 04:16:25,339 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,339 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4df932c55bcbb5ec85af558c057d2606, NAME => 'hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:25,339 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 768a0e28f09d1bbbf09bf2c25810f971, NAME => 'hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. service=MultiRowMutationService 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,340 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
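The ASSIGN and OpenRegionProcedure entries above move the hbase:namespace and hbase:rsgroup regions onto specific region servers. As a hedged illustration (not part of the test), the sketch below shows how a client could confirm where a region landed once those procedures finish, using the RegionLocator API; the output shown in the comment is indicative only.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class AssignmentCheckSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // After pid=8 above completes, this would print something like
            // 4df932c55bcbb5ec85af558c057d2606 on jenkins-hbase4.apache.org,35253,...
            System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
          }
        }
      }
    }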
2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,340 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,341 INFO [StoreOpener-4df932c55bcbb5ec85af558c057d2606-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,341 INFO [StoreOpener-768a0e28f09d1bbbf09bf2c25810f971-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,342 DEBUG [StoreOpener-4df932c55bcbb5ec85af558c057d2606-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/info 2023-07-14 04:16:25,342 DEBUG [StoreOpener-4df932c55bcbb5ec85af558c057d2606-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/info 2023-07-14 04:16:25,342 DEBUG [StoreOpener-768a0e28f09d1bbbf09bf2c25810f971-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/m 2023-07-14 04:16:25,342 DEBUG [StoreOpener-768a0e28f09d1bbbf09bf2c25810f971-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/m 2023-07-14 04:16:25,343 INFO [StoreOpener-768a0e28f09d1bbbf09bf2c25810f971-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 768a0e28f09d1bbbf09bf2c25810f971 columnFamilyName m 2023-07-14 04:16:25,343 INFO 
[StoreOpener-4df932c55bcbb5ec85af558c057d2606-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4df932c55bcbb5ec85af558c057d2606 columnFamilyName info 2023-07-14 04:16:25,343 INFO [StoreOpener-768a0e28f09d1bbbf09bf2c25810f971-1] regionserver.HStore(310): Store=768a0e28f09d1bbbf09bf2c25810f971/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:25,343 INFO [StoreOpener-4df932c55bcbb5ec85af558c057d2606-1] regionserver.HStore(310): Store=4df932c55bcbb5ec85af558c057d2606/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:25,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,344 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:25,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:25,350 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:25,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:25,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 768a0e28f09d1bbbf09bf2c25810f971; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@26871f37, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:25,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 768a0e28f09d1bbbf09bf2c25810f971: 2023-07-14 04:16:25,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4df932c55bcbb5ec85af558c057d2606; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10601482400, jitterRate=-0.012660011649131775}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:25,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4df932c55bcbb5ec85af558c057d2606: 2023-07-14 04:16:25,352 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971., pid=9, masterSystemTime=1689308185334 2023-07-14 04:16:25,353 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606., pid=8, masterSystemTime=1689308185334 2023-07-14 04:16:25,356 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,357 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:25,357 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=768a0e28f09d1bbbf09bf2c25810f971, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:25,357 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689308185357"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308185357"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308185357"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308185357"}]},"ts":"1689308185357"} 2023-07-14 04:16:25,358 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 
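The opened-region lines above print a ConstantSizeRegionSplitPolicy with a desiredMaxFileSize and a jitterRate. Those two numbers are consistent with the configured region maximum file size scaled by (1 + jitterRate); the small arithmetic sketch below reproduces the 4df932c5... value under the assumption that the cluster runs with the stock 10 GiB hbase.hregion.max.filesize, which the log itself never prints.

    public class SplitSizeJitterSketch {
      public static void main(String[] args) {
        // Assumed stock default: 10 GiB region max file size
        long maxFileSize = 10L * 1024 * 1024 * 1024;
        // jitterRate as printed for region 4df932c5... above
        double jitterRate = -0.012660011649131775;
        long desired = (long) (maxFileSize * (1.0 + jitterRate));
        // ~10601482400, matching the desiredMaxFileSize logged above (up to rounding)
        System.out.println(desired);
      }
    }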
2023-07-14 04:16:25,358 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4df932c55bcbb5ec85af558c057d2606, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,358 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689308185358"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308185358"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308185358"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308185358"}]},"ts":"1689308185358"} 2023-07-14 04:16:25,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-14 04:16:25,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 768a0e28f09d1bbbf09bf2c25810f971, server=jenkins-hbase4.apache.org,39705,1689308183864 in 178 msec 2023-07-14 04:16:25,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-14 04:16:25,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 4df932c55bcbb5ec85af558c057d2606, server=jenkins-hbase4.apache.org,35253,1689308184021 in 179 msec 2023-07-14 04:16:25,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-14 04:16:25,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=768a0e28f09d1bbbf09bf2c25810f971, ASSIGN in 185 msec 2023-07-14 04:16:25,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-14 04:16:25,363 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:25,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4df932c55bcbb5ec85af558c057d2606, ASSIGN in 188 msec 2023-07-14 04:16:25,363 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308185363"}]},"ts":"1689308185363"} 2023-07-14 04:16:25,363 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:25,363 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308185363"}]},"ts":"1689308185363"} 2023-07-14 04:16:25,364 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-14 04:16:25,365 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-14 04:16:25,367 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:25,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 253 msec 2023-07-14 04:16:25,368 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:25,369 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 248 msec 2023-07-14 04:16:25,419 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,420 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,422 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-14 04:16:25,422 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-14 04:16:25,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-14 04:16:25,423 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:25,424 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:25,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,427 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:25,427 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,427 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56718, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-14 04:16:25,431 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:16:25,433 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-14 04:16:25,437 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:25,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-14 04:16:25,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-14 04:16:25,447 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:25,450 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-14 04:16:25,454 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-14 04:16:25,457 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-14 04:16:25,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.125sec 2023-07-14 04:16:25,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-14 04:16:25,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-14 04:16:25,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-14 04:16:25,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1689308183695-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-14 04:16:25,458 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44421,1689308183695-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
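The CreateNamespaceProcedure entries above (pid=10 for default, pid=11 for hbase) are the namespaces the master sets up at startup. For comparison, the sketch below creates a user namespace through the Admin API, which drives the same procedure; the namespace name is illustrative and not from the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Runs a CreateNamespaceProcedure like pid=10 / pid=11 above
          admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName()); // default, hbase, example_ns, ...
          }
        }
      }
    }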
2023-07-14 04:16:25,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-14 04:16:25,529 DEBUG [Listener at localhost/38975] zookeeper.ReadOnlyZKClient(139): Connect 0x17e40f1d to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:25,534 DEBUG [Listener at localhost/38975] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a933ce3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:25,536 DEBUG [hconnection-0x65064230-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,537 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60706, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,539 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:25,539 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:25,541 DEBUG [Listener at localhost/38975] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-14 04:16:25,543 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-14 04:16:25,546 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-14 04:16:25,546 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:25,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-14 04:16:25,548 DEBUG [Listener at localhost/38975] zookeeper.ReadOnlyZKClient(139): Connect 0x0c6b1b47 to 127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:25,552 DEBUG [Listener at localhost/38975] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b2bdf38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:25,552 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:25,555 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:25,556 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101620bbf7a000a connected 2023-07-14 
04:16:25,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,561 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-14 04:16:25,572 INFO [Listener at localhost/38975] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-14 04:16:25,573 INFO [Listener at localhost/38975] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-14 04:16:25,574 INFO [Listener at localhost/38975] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46197 2023-07-14 04:16:25,574 INFO [Listener at localhost/38975] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-14 04:16:25,575 DEBUG [Listener at localhost/38975] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-14 04:16:25,575 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:25,576 INFO [Listener at localhost/38975] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-14 04:16:25,577 INFO [Listener at localhost/38975] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46197 connecting to ZooKeeper ensemble=127.0.0.1:62981 2023-07-14 04:16:25,580 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:461970x0, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-14 04:16:25,584 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(162): regionserver:461970x0, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-14 04:16:25,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:46197-0x101620bbf7a000b connected 2023-07-14 04:16:25,584 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-14 04:16:25,585 DEBUG [Listener at localhost/38975] zookeeper.ZKUtil(164): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-14 04:16:25,585 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46197 2023-07-14 04:16:25,586 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46197 2023-07-14 04:16:25,586 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46197 2023-07-14 04:16:25,586 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46197 2023-07-14 04:16:25,586 DEBUG [Listener at localhost/38975] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46197 2023-07-14 04:16:25,588 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-14 04:16:25,588 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-14 04:16:25,588 INFO [Listener at localhost/38975] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] http.HttpServer(1146): Jetty bound to port 40627 2023-07-14 04:16:25,589 INFO [Listener at localhost/38975] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-14 04:16:25,591 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:25,591 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10a95ce4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,AVAILABLE} 2023-07-14 04:16:25,591 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:25,591 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@72d59f89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-14 04:16:25,703 INFO [Listener at localhost/38975] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-14 04:16:25,704 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-14 04:16:25,704 INFO [Listener at localhost/38975] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-14 04:16:25,704 INFO [Listener at localhost/38975] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-14 04:16:25,705 INFO [Listener at localhost/38975] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-14 04:16:25,705 INFO [Listener at localhost/38975] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6387f4d5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/java.io.tmpdir/jetty-0_0_0_0-40627-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8453281957747026526/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:25,707 INFO [Listener at localhost/38975] server.AbstractConnector(333): Started ServerConnector@2014f237{HTTP/1.1, (http/1.1)}{0.0.0.0:40627} 2023-07-14 04:16:25,707 INFO [Listener at localhost/38975] server.Server(415): Started @44584ms 2023-07-14 04:16:25,709 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(951): ClusterId : 1c816a60-4be4-4034-8899-7464f4b8a5ee 2023-07-14 04:16:25,709 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-14 04:16:25,713 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-14 04:16:25,713 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-14 04:16:25,714 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-14 04:16:25,715 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ReadOnlyZKClient(139): Connect 0x331c5a65 to 
127.0.0.1:62981 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-14 04:16:25,719 DEBUG [RS:3;jenkins-hbase4:46197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ddbb276, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-14 04:16:25,719 DEBUG [RS:3;jenkins-hbase4:46197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39e43115, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:25,727 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:46197 2023-07-14 04:16:25,728 INFO [RS:3;jenkins-hbase4:46197] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-14 04:16:25,728 INFO [RS:3;jenkins-hbase4:46197] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-14 04:16:25,728 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1022): About to register with Master. 2023-07-14 04:16:25,728 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44421,1689308183695 with isa=jenkins-hbase4.apache.org/172.31.14.131:46197, startcode=1689308185572 2023-07-14 04:16:25,728 DEBUG [RS:3;jenkins-hbase4:46197] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-14 04:16:25,729 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44203, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-14 04:16:25,730 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,730 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-14 04:16:25,730 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b 2023-07-14 04:16:25,730 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33863 2023-07-14 04:16:25,730 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41355 2023-07-14 04:16:25,736 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:25,736 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:25,736 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:25,736 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:25,736 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,736 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,736 WARN [RS:3;jenkins-hbase4:46197] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-14 04:16:25,737 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46197,1689308185572] 2023-07-14 04:16:25,737 INFO [RS:3;jenkins-hbase4:46197] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-14 04:16:25,737 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-14 04:16:25,737 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,737 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,737 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,737 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,740 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-14 04:16:25,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,740 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:25,741 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:25,742 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:25,742 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,742 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,742 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,743 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:25,743 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,744 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:25,744 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ZKUtil(162): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:25,745 DEBUG [RS:3;jenkins-hbase4:46197] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-14 04:16:25,745 INFO [RS:3;jenkins-hbase4:46197] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-14 04:16:25,746 INFO [RS:3;jenkins-hbase4:46197] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-14 04:16:25,746 INFO [RS:3;jenkins-hbase4:46197] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-14 04:16:25,746 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,746 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-14 04:16:25,748 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:25,748 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,748 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,748 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,748 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,748 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,749 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-14 04:16:25,749 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,749 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,749 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,749 DEBUG [RS:3;jenkins-hbase4:46197] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-14 04:16:25,749 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,750 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,750 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-14 04:16:25,761 INFO [RS:3;jenkins-hbase4:46197] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-14 04:16:25,761 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46197,1689308185572-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-14 04:16:25,772 INFO [RS:3;jenkins-hbase4:46197] regionserver.Replication(203): jenkins-hbase4.apache.org,46197,1689308185572 started 2023-07-14 04:16:25,772 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46197,1689308185572, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46197, sessionid=0x101620bbf7a000b 2023-07-14 04:16:25,772 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-14 04:16:25,772 DEBUG [RS:3;jenkins-hbase4:46197] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,772 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46197,1689308185572' 2023-07-14 04:16:25,772 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-14 04:16:25,772 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:25,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46197,1689308185572' 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-14 04:16:25,773 DEBUG [RS:3;jenkins-hbase4:46197] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-14 04:16:25,773 INFO [RS:3;jenkins-hbase4:46197] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-14 04:16:25,773 INFO [RS:3;jenkins-hbase4:46197] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-14 04:16:25,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:25,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:25,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:25,780 DEBUG [hconnection-0xa29c91c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,782 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60712, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,785 DEBUG [hconnection-0xa29c91c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-14 04:16:25,787 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46518, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-14 04:16:25,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:25,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:25,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36124 deadline: 1689309385791, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:25,792 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:25,793 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,794 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:25,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:25,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:25,851 INFO [Listener at localhost/38975] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 515) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp215050175-2310 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215050175-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp766280167-2595 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2125164216@qtp-1373768995-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45939 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0xa29c91c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:42129 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:33863 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x0c6b1b47 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp537693746-2280 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp587218053-2322 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x75e94ef0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 40287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@b2fe258 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46197Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67fab74b-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
qtp587218053-2325-acceptor-0@2d3c2e64-ServerConnector@2dc7c695{HTTP/1.1, (http/1.1)}{0.0.0.0:44095} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp1001287953-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 748054731@qtp-1063900309-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67397cdc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:42129 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@67d7b2b8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3a1a940b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46197-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
hconnection-0x7bfa5ae7-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39705 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1292433793@qtp-767825892-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44631 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35659 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@113eb2f6[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at /127.0.0.1:43210 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x331c5a65 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:42129 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data3/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1626479540_17 at /127.0.0.1:52148 [Receiving block 
BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1626479540_17 at /127.0.0.1:43190 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp766280167-2594 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215050175-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp587218053-2321 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308184527 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data6/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67fab74b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62077@0x2b9ac065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at 
/127.0.0.1:52158 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 2 on default port 40287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 35923 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp537693746-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6d04da20-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1c50d6e9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp537693746-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:39705-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData-prefix:jenkins-hbase4.apache.org,44421,1689308183695 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@22366487 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@47a516b6[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@33c16c23 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36435,1689308178027 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp587218053-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 35923 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1066233548@qtp-767825892-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x17e40f1d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1146221845) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp766280167-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b-prefix:jenkins-hbase4.apache.org,35659,1689308184169.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:33863 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62077@0x2b9ac065-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35923 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at /127.0.0.1:34100 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp537693746-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1001287953-2251-acceptor-0@2a2cf15e-ServerConnector@7dbd2218{HTTP/1.1, (http/1.1)}{0.0.0.0:43131} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 38975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:62981@0x0c6b1b47-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:42129 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data1/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7bfa5ae7-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33863 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data5/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1626479540_17 at /127.0.0.1:34086 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp557211652-2223 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34751-SendThread(127.0.0.1:62077) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x0c6b1b47-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44421,1689308183695 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Session-HouseKeeper-72f6fde0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 38975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:1;jenkins-hbase4:35253-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp557211652-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at /127.0.0.1:43220 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1626479540_17 at /127.0.0.1:52162 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35923 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x75e94ef0-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:42129 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 232619060@qtp-1373768995-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data4/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1615391675_17 at /127.0.0.1:43164 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x17e40f1d-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:42129 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_845730030_17 at /127.0.0.1:34096 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 38975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-50e3498a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at /127.0.0.1:34112 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1001287953-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1001287953-2250 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/38975.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:1;jenkins-hbase4:35253 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp557211652-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ProcessThread(sid:0 cport:62981): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67397cdc-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp215050175-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xa29c91c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6ba4dce4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x331c5a65-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1df90345 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp537693746-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp557211652-2220-acceptor-0@267fc865-ServerConnector@65704643{HTTP/1.1, (http/1.1)}{0.0.0.0:41355} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:33863 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1001287953-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215050175-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46197 java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp215050175-2311-acceptor-0@1e3427b0-ServerConnector@352219ef{HTTP/1.1, (http/1.1)}{0.0.0.0:44343} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp557211652-2219 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 906656909@qtp-925950033-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40633 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_845730030_17 at /127.0.0.1:52152 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x066dfb79-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp587218053-2326 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:44421 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40287 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp766280167-2596 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data3) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/38975.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_845730030_17 at /127.0.0.1:43194 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34751-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp587218053-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 35923 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33863 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp537693746-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:42129 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:35253Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b-prefix:jenkins-hbase4.apache.org,39705,1689308183864 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67fab74b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1615391675_17 at /127.0.0.1:34066 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp537693746-2281-acceptor-0@14a472fe-ServerConnector@4cdfbf9d{HTTP/1.1, (http/1.1)}{0.0.0.0:42433} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@22bbfe23 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35659Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:62981 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: jenkins-hbase4:39705Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_845730030_17 at /127.0.0.1:43128 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@520b79d4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 38975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 40287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp557211652-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5e1ad929 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp766280167-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@300b03e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@3124a64 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:42129 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_96441579_17 at /127.0.0.1:52172 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7bfa5ae7-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp587218053-2327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-134f4aae-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1615391675_17 at /127.0.0.1:52124 [Receiving block BP-760380148-172.31.14.131-1689308182979:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35659 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp587218053-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308184527 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@26c6a963[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1352665868) connection to localhost/127.0.0.1:42129 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:62077@0x2b9ac065-SendThread(127.0.0.1:62077) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:228) org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp215050175-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x75e94ef0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38975-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp215050175-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener 
at localhost/38975-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x67397cdc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 38975 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp766280167-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp766280167-2589 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1056818915.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: ReadOnlyZKClient-127.0.0.1:62981@0x066dfb79-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data2/current/BP-760380148-172.31.14.131-1689308182979 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp557211652-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b-prefix:jenkins-hbase4.apache.org,35253,1689308184021 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x17e40f1d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp557211652-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1001287953-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b-prefix:jenkins-hbase4.apache.org,35659,1689308184169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1001287953-2254 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:33863 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-760380148-172.31.14.131-1689308182979:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: qtp1001287953-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44421 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp766280167-2590-acceptor-0@4f72ba68-ServerConnector@2014f237{HTTP/1.1, (http/1.1)}{0.0.0.0:40627} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x066dfb79 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1440691410.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 2028232482@qtp-1063900309-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45007 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 819445533@qtp-925950033-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 2 on default port 33863 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: 
BP-760380148-172.31.14.131-1689308182979:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35659-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 38975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:0;jenkins-hbase4:39705 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35923 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7057b85a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp537693746-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62981@0x331c5a65-SendThread(127.0.0.1:62981) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x65064230-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=831 (was 798) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3654 (was 3923) 2023-07-14 04:16:25,855 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-14 04:16:25,875 INFO [Listener at localhost/38975] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=562, OpenFileDescriptor=829, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3652 2023-07-14 04:16:25,875 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=562 is superior to 500 2023-07-14 04:16:25,875 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-14 04:16:25,876 INFO [RS:3;jenkins-hbase4:46197] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46197%2C1689308185572, suffix=, logDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,46197,1689308185572, archiveDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs, maxLogs=32 2023-07-14 04:16:25,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:25,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:25,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:25,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:25,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:25,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:25,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:25,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:25,895 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:25,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:25,902 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK] 2023-07-14 04:16:25,902 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK] 2023-07-14 04:16:25,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:25,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:25,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:25,907 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK] 2023-07-14 04:16:25,909 INFO [RS:3;jenkins-hbase4:46197] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/WALs/jenkins-hbase4.apache.org,46197,1689308185572/jenkins-hbase4.apache.org%2C46197%2C1689308185572.1689308185876 2023-07-14 04:16:25,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:25,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:25,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36124 deadline: 1689309385912, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:25,913 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:25,914 DEBUG [RS:3;jenkins-hbase4:46197] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33459,DS-16893484-af6d-4fa6-820c-6ad960fc5775,DISK], DatanodeInfoWithStorage[127.0.0.1:46789,DS-ecc455eb-0975-4a9c-937f-98cf061fa274,DISK], DatanodeInfoWithStorage[127.0.0.1:45853,DS-e787a120-6f7b-4dce-bbd9-43f1468e3969,DISK]] 2023-07-14 04:16:25,915 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:25,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:25,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:25,916 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:25,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:25,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:25,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:25,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-14 04:16:25,921 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:25,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-14 04:16:25,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 04:16:25,923 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:25,924 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:25,924 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:25,926 INFO [PEWorker-1] 
procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-14 04:16:25,928 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:25,928 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 empty. 2023-07-14 04:16:25,929 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:25,929 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-14 04:16:25,941 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-14 04:16:25,942 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => ba7b37db96f6a07a6076cf2f5acb70f9, NAME => 't1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp 2023-07-14 04:16:25,950 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:25,950 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing ba7b37db96f6a07a6076cf2f5acb70f9, disabling compactions & flushes 2023-07-14 04:16:25,950 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:25,951 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:25,951 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. after waiting 0 ms 2023-07-14 04:16:25,951 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:25,951 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 
2023-07-14 04:16:25,951 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for ba7b37db96f6a07a6076cf2f5acb70f9: 2023-07-14 04:16:25,953 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-14 04:16:25,954 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308185954"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308185954"}]},"ts":"1689308185954"} 2023-07-14 04:16:25,955 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-14 04:16:25,956 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-14 04:16:25,956 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308185956"}]},"ts":"1689308185956"} 2023-07-14 04:16:25,957 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-14 04:16:25,960 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-14 04:16:25,961 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, ASSIGN}] 2023-07-14 04:16:25,961 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, ASSIGN 2023-07-14 04:16:25,962 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39705,1689308183864; forceNewPlan=false, retain=false 2023-07-14 04:16:26,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 04:16:26,113 INFO [jenkins-hbase4:44421] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-14 04:16:26,114 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ba7b37db96f6a07a6076cf2f5acb70f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:26,114 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308186114"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308186114"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308186114"}]},"ts":"1689308186114"} 2023-07-14 04:16:26,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure ba7b37db96f6a07a6076cf2f5acb70f9, server=jenkins-hbase4.apache.org,39705,1689308183864}] 2023-07-14 04:16:26,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 04:16:26,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:26,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba7b37db96f6a07a6076cf2f5acb70f9, NAME => 't1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.', STARTKEY => '', ENDKEY => ''} 2023-07-14 04:16:26,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-14 04:16:26,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,274 INFO [StoreOpener-ba7b37db96f6a07a6076cf2f5acb70f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,275 DEBUG [StoreOpener-ba7b37db96f6a07a6076cf2f5acb70f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/cf1 2023-07-14 04:16:26,275 DEBUG [StoreOpener-ba7b37db96f6a07a6076cf2f5acb70f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/cf1 2023-07-14 04:16:26,276 INFO [StoreOpener-ba7b37db96f6a07a6076cf2f5acb70f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba7b37db96f6a07a6076cf2f5acb70f9 columnFamilyName cf1 2023-07-14 04:16:26,276 INFO [StoreOpener-ba7b37db96f6a07a6076cf2f5acb70f9-1] regionserver.HStore(310): Store=ba7b37db96f6a07a6076cf2f5acb70f9/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-14 04:16:26,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-14 04:16:26,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba7b37db96f6a07a6076cf2f5acb70f9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9575580960, jitterRate=-0.10820452868938446}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-14 04:16:26,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba7b37db96f6a07a6076cf2f5acb70f9: 2023-07-14 04:16:26,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9., pid=14, masterSystemTime=1689308186268 2023-07-14 04:16:26,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:26,290 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 
2023-07-14 04:16:26,291 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ba7b37db96f6a07a6076cf2f5acb70f9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:26,291 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308186291"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689308186291"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689308186291"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689308186291"}]},"ts":"1689308186291"} 2023-07-14 04:16:26,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-14 04:16:26,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure ba7b37db96f6a07a6076cf2f5acb70f9, server=jenkins-hbase4.apache.org,39705,1689308183864 in 176 msec 2023-07-14 04:16:26,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-14 04:16:26,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, ASSIGN in 333 msec 2023-07-14 04:16:26,303 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-14 04:16:26,303 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308186303"}]},"ts":"1689308186303"} 2023-07-14 04:16:26,304 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-14 04:16:26,307 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-14 04:16:26,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 388 msec 2023-07-14 04:16:26,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-14 04:16:26,526 INFO [Listener at localhost/38975] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-14 04:16:26,526 DEBUG [Listener at localhost/38975] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-14 04:16:26,527 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:26,529 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-14 04:16:26,529 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:26,529 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-14 04:16:26,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-14 04:16:26,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-14 04:16:26,533 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-14 04:16:26,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-14 04:16:26,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:36124 deadline: 1689308246530, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-14 04:16:26,535 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:26,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-14 04:16:26,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:26,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:26,637 INFO [Listener at localhost/38975] client.HBaseAdmin$15(890): Started disable of t1 2023-07-14 04:16:26,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-14 04:16:26,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-14 04:16:26,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 04:16:26,641 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308186641"}]},"ts":"1689308186641"} 2023-07-14 04:16:26,642 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-14 04:16:26,643 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-14 04:16:26,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, UNASSIGN}] 2023-07-14 04:16:26,644 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, UNASSIGN 2023-07-14 04:16:26,645 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ba7b37db96f6a07a6076cf2f5acb70f9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:26,645 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308186645"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689308186645"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689308186645"}]},"ts":"1689308186645"} 2023-07-14 04:16:26,646 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure ba7b37db96f6a07a6076cf2f5acb70f9, server=jenkins-hbase4.apache.org,39705,1689308183864}] 2023-07-14 04:16:26,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 04:16:26,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba7b37db96f6a07a6076cf2f5acb70f9, disabling compactions & flushes 2023-07-14 04:16:26,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:26,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:26,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. after waiting 0 ms 2023-07-14 04:16:26,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 
2023-07-14 04:16:26,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-14 04:16:26,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9. 2023-07-14 04:16:26,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba7b37db96f6a07a6076cf2f5acb70f9: 2023-07-14 04:16:26,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,804 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=ba7b37db96f6a07a6076cf2f5acb70f9, regionState=CLOSED 2023-07-14 04:16:26,804 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689308186804"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689308186804"}]},"ts":"1689308186804"} 2023-07-14 04:16:26,806 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-14 04:16:26,806 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure ba7b37db96f6a07a6076cf2f5acb70f9, server=jenkins-hbase4.apache.org,39705,1689308183864 in 159 msec 2023-07-14 04:16:26,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-14 04:16:26,808 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=ba7b37db96f6a07a6076cf2f5acb70f9, UNASSIGN in 162 msec 2023-07-14 04:16:26,808 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689308186808"}]},"ts":"1689308186808"} 2023-07-14 04:16:26,809 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-14 04:16:26,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:16:26,810 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-14 04:16:26,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:26,810 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-14 04:16:26,810 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:16:26,810 INFO 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-14 04:16:26,811 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-14 04:16:26,812 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 174 msec 2023-07-14 04:16:26,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-14 04:16:26,942 INFO [Listener at localhost/38975] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-14 04:16:26,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-14 04:16:26,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-14 04:16:26,946 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-14 04:16:26,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-14 04:16:26,947 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-14 04:16:26,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:26,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:26,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:26,950 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,951 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/cf1, FileablePath, hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/recovered.edits] 2023-07-14 04:16:26,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 04:16:26,957 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/recovered.edits/4.seqid to hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/archive/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9/recovered.edits/4.seqid 2023-07-14 04:16:26,957 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted 
hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/.tmp/data/default/t1/ba7b37db96f6a07a6076cf2f5acb70f9 2023-07-14 04:16:26,957 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-14 04:16:26,959 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-14 04:16:26,961 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-14 04:16:26,962 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-14 04:16:26,963 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-14 04:16:26,963 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-14 04:16:26,963 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689308186963"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:26,964 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-14 04:16:26,964 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ba7b37db96f6a07a6076cf2f5acb70f9, NAME => 't1,,1689308185918.ba7b37db96f6a07a6076cf2f5acb70f9.', STARTKEY => '', ENDKEY => ''}] 2023-07-14 04:16:26,964 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-14 04:16:26,964 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689308186964"}]},"ts":"9223372036854775807"} 2023-07-14 04:16:26,965 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-14 04:16:26,967 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-14 04:16:26,968 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-14 04:16:27,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-14 04:16:27,056 INFO [Listener at localhost/38975] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-14 04:16:27,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:27,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,072 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,074 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36124 deadline: 1689309387082, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,083 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:27,086 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,087 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,109 INFO [Listener at localhost/38975] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 562) - Thread LEAK? -, OpenFileDescriptor=839 (was 829) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3688 (was 3652) - AvailableMemoryMB LEAK? 
- 2023-07-14 04:16:27,109 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-14 04:16:27,129 INFO [Listener at localhost/38975] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3685 2023-07-14 04:16:27,129 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-14 04:16:27,130 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-14 04:16:27,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,145 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,151 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387156, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,157 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:27,159 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,160 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-14 04:16:27,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:27,163 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-14 04:16:27,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-14 04:16:27,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-14 04:16:27,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:27,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,182 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387193, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,194 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:27,196 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,196 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,216 INFO [Listener at localhost/38975] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3685 (was 3685) 2023-07-14 04:16:27,216 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-14 04:16:27,236 INFO [Listener at localhost/38975] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3685 2023-07-14 04:16:27,236 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-14 04:16:27,236 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-14 04:16:27,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:27,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,249 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,251 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387258, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,259 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:27,261 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,262 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-14 04:16:27,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,279 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387288, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,289 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-14 04:16:27,291 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,292 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,309 INFO [Listener at localhost/38975] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=532 (was 532), ProcessCount=172 (was 172), AvailableMemoryMB=3685 (was 3685) 2023-07-14 04:16:27,309 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-14 04:16:27,326 INFO [Listener at localhost/38975] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=532, ProcessCount=172, AvailableMemoryMB=3684 2023-07-14 04:16:27,326 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-14 04:16:27,326 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-14 04:16:27,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-14 04:16:27,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,338 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,341 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387347, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,348 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-14 04:16:27,349 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-14 04:16:27,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,350 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-14 04:16:27,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-14 04:16:27,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-14 04:16:27,351 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-14 04:16:27,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-14 04:16:27,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-14 04:16:27,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-14 04:16:27,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-14 04:16:27,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,363 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 04:16:27,367 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:27,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-14 04:16:27,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-14 04:16:27,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-14 04:16:27,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:36124 deadline: 1689309387465, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-14 04:16:27,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-14 04:16:27,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-14 04:16:27,485 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-14 04:16:27,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-14 04:16:27,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-14 04:16:27,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-14 04:16:27,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-14 04:16:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-14 04:16:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-14 04:16:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-14 04:16:27,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,602 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,604 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-14 04:16:27,608 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,609 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-14 04:16:27,609 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-14 04:16:27,609 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,611 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-14 04:16:27,612 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-14 04:16:27,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-14 04:16:27,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-14 04:16:27,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-14 04:16:27,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-14 04:16:27,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:36124 deadline: 1689308247717, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-14 04:16:27,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
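
The testNamespaceConstraint stretch above exercises the coupling between namespaces and rsgroups: a namespace is pinned to a group via the hbase.rsgroup.name property, the group cannot be removed while a namespace still references it ("RSGroup Group_foo is referenced by namespace: Group_foo"), and creating a namespace that points at a nonexistent group is rejected in preCreateNamespace ("Region server group foo does not exist."). A minimal sketch of that flow using the standard Admin API, assuming the rsgroup coprocessors are installed; names are illustrative and the exact exception surfaced client-side is treated as an assumption (it arrives as an IOException subclass):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceRsGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      groups.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      try {
        groups.removeRSGroup("Group_foo");   // rejected while the namespace references the group
      } catch (IOException expected) {
        // server side raises ConstraintException: "RSGroup Group_foo is referenced by namespace: Group_foo"
      }

      admin.deleteNamespace("Group_foo");    // drop the reference first ...
      groups.removeRSGroup("Group_foo");     // ... then the group can be removed

      try {
        admin.createNamespace(NamespaceDescriptor.create("some_ns")
            .addConfiguration("hbase.rsgroup.name", "foo").build());  // group "foo" was never created
      } catch (IOException expected) {
        // preCreateNamespace rejects it: "Region server group foo does not exist."
      }
    }
  }
}
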
2023-07-14 04:16:27,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-14 04:16:27,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-14 04:16:27,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-14 04:16:27,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
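
The cleanup pass that continues below again tries to move the active master's address (jenkins-hbase4.apache.org:44421) into the "master" group, and the endpoint rejects it with a ConstraintException because only live region server addresses can be placed in a group; the test helper only logs the failure ("Got this on setup, FYI"). A rough sketch of tolerating that rejection, assuming the same client class as in the stack traces:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MoveMasterSketch {
  private static final Logger LOG = LoggerFactory.getLogger(MoveMasterSketch.class);

  static void tryMoveMasterAddress(RSGroupAdminClient groups, Address masterAddress) {
    try {
      groups.moveServers(Collections.singleton(masterAddress), "master");
    } catch (IOException e) {
      // e.g. ConstraintException: "Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist."
      LOG.warn("Got this on setup, FYI", e);
    }
  }
}
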
2023-07-14 04:16:27,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-14 04:16:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-14 04:16:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-14 04:16:27,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-14 04:16:27,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-14 04:16:27,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-14 04:16:27,735 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-14 04:16:27,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-14 04:16:27,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-14 04:16:27,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-14 04:16:27,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-14 04:16:27,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-14 04:16:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-14 04:16:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-14 04:16:27,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44421] to rsgroup master 2023-07-14 04:16:27,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-14 04:16:27,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36124 deadline: 1689309387745, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. 2023-07-14 04:16:27,745 WARN [Listener at localhost/38975] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44421 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-14 04:16:27,747 INFO [Listener at localhost/38975] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-14 04:16:27,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-14 04:16:27,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-14 04:16:27,748 INFO [Listener at localhost/38975] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35253, jenkins-hbase4.apache.org:35659, jenkins-hbase4.apache.org:39705, jenkins-hbase4.apache.org:46197], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-14 04:16:27,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-14 04:16:27,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44421] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-14 04:16:27,765 INFO [Listener at localhost/38975] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=513 (was 532), ProcessCount=170 (was 172), AvailableMemoryMB=5674 (was 3684) - AvailableMemoryMB LEAK?
- 2023-07-14 04:16:27,765 WARN [Listener at localhost/38975] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-14 04:16:27,765 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-14 04:16:27,765 INFO [Listener at localhost/38975] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-14 04:16:27,765 DEBUG [Listener at localhost/38975] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x17e40f1d to 127.0.0.1:62981 2023-07-14 04:16:27,765 DEBUG [Listener at localhost/38975] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,765 DEBUG [Listener at localhost/38975] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-14 04:16:27,766 DEBUG [Listener at localhost/38975] util.JVMClusterUtil(257): Found active master hash=875071486, stopped=false 2023-07-14 04:16:27,766 DEBUG [Listener at localhost/38975] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-14 04:16:27,766 DEBUG [Listener at localhost/38975] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-14 04:16:27,766 INFO [Listener at localhost/38975] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:27,767 INFO [Listener at localhost/38975] procedure2.ProcedureExecutor(629): Stopping 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:27,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:27,767 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-14 04:16:27,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:27,768 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:27,768 DEBUG [Listener at localhost/38975] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x066dfb79 to 127.0.0.1:62981 2023-07-14 04:16:27,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:27,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-14 04:16:27,769 DEBUG [Listener at localhost/38975] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39705,1689308183864' ***** 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35253,1689308184021' ***** 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:27,769 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35659,1689308184169' ***** 2023-07-14 04:16:27,769 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:27,769 INFO [Listener at localhost/38975] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:27,771 INFO [Listener at localhost/38975] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46197,1689308185572' ***** 2023-07-14 04:16:27,771 INFO [Listener at localhost/38975] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-14 04:16:27,771 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:27,771 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:27,777 INFO [RS:1;jenkins-hbase4:35253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3a3201c6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:27,777 INFO [RS:0;jenkins-hbase4:39705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54c5488a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:27,778 INFO [RS:3;jenkins-hbase4:46197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6387f4d5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:27,778 INFO [RS:1;jenkins-hbase4:35253] server.AbstractConnector(383): Stopped 
ServerConnector@4cdfbf9d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:27,778 INFO [RS:2;jenkins-hbase4:35659] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ad0ad4c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-14 04:16:27,778 INFO [RS:1;jenkins-hbase4:35253] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:27,778 INFO [RS:3;jenkins-hbase4:46197] server.AbstractConnector(383): Stopped ServerConnector@2014f237{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:27,778 INFO [RS:0;jenkins-hbase4:39705] server.AbstractConnector(383): Stopped ServerConnector@7dbd2218{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:27,779 INFO [RS:1;jenkins-hbase4:35253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@642793fd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:27,779 INFO [RS:0;jenkins-hbase4:39705] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:27,779 INFO [RS:2;jenkins-hbase4:35659] server.AbstractConnector(383): Stopped ServerConnector@352219ef{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:27,779 INFO [RS:3;jenkins-hbase4:46197] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:27,779 INFO [RS:2;jenkins-hbase4:35659] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:27,779 INFO [RS:1;jenkins-hbase4:35253] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1989f106{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:27,781 INFO [RS:3;jenkins-hbase4:46197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@72d59f89{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:27,780 INFO [RS:0;jenkins-hbase4:39705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@a0a1c5c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:27,782 INFO [RS:2;jenkins-hbase4:35659] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@179e7414{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:27,783 INFO [RS:0;jenkins-hbase4:39705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@782404d1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:27,784 INFO [RS:1;jenkins-hbase4:35253] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:27,784 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:27,784 INFO [RS:1;jenkins-hbase4:35253] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager 
gracefully. 2023-07-14 04:16:27,784 INFO [RS:1;jenkins-hbase4:35253] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:27,783 INFO [RS:3;jenkins-hbase4:46197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10a95ce4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:27,785 INFO [RS:2;jenkins-hbase4:35659] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c6daf69{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:27,784 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(3305): Received CLOSE for 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:27,785 INFO [RS:0;jenkins-hbase4:39705] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:27,786 INFO [RS:3;jenkins-hbase4:46197] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:27,786 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:27,786 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:27,786 INFO [RS:0;jenkins-hbase4:39705] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:27,786 INFO [RS:3;jenkins-hbase4:46197] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:27,786 INFO [RS:0;jenkins-hbase4:39705] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:27,786 INFO [RS:3;jenkins-hbase4:46197] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:27,786 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(3305): Received CLOSE for 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:27,786 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:27,787 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:27,787 DEBUG [RS:3;jenkins-hbase4:46197] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x331c5a65 to 127.0.0.1:62981 2023-07-14 04:16:27,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 768a0e28f09d1bbbf09bf2c25810f971, disabling compactions & flushes 2023-07-14 04:16:27,787 DEBUG [RS:0;jenkins-hbase4:39705] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67fab74b to 127.0.0.1:62981 2023-07-14 04:16:27,787 DEBUG [RS:0;jenkins-hbase4:39705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:27,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 
2023-07-14 04:16:27,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. after waiting 0 ms 2023-07-14 04:16:27,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:27,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 768a0e28f09d1bbbf09bf2c25810f971 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-14 04:16:27,787 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:27,787 DEBUG [RS:3;jenkins-hbase4:46197] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,788 DEBUG [RS:1;jenkins-hbase4:35253] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75e94ef0 to 127.0.0.1:62981 2023-07-14 04:16:27,788 DEBUG [RS:1;jenkins-hbase4:35253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,787 INFO [RS:2;jenkins-hbase4:35659] regionserver.HeapMemoryManager(220): Stopping 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:27,788 DEBUG [RS:2;jenkins-hbase4:35659] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x67397cdc to 127.0.0.1:62981 2023-07-14 04:16:27,788 DEBUG [RS:2;jenkins-hbase4:35659] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:27,788 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-14 04:16:27,788 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-14 04:16:27,787 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 04:16:27,789 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 04:16:27,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4df932c55bcbb5ec85af558c057d2606, disabling compactions & flushes 2023-07-14 04:16:27,788 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-14 04:16:27,788 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46197,1689308185572; all regions closed. 
2023-07-14 04:16:27,789 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1478): Online Regions={4df932c55bcbb5ec85af558c057d2606=hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606.} 2023-07-14 04:16:27,789 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-14 04:16:27,789 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:27,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:27,789 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-14 04:16:27,789 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1478): Online Regions={768a0e28f09d1bbbf09bf2c25810f971=hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971.} 2023-07-14 04:16:27,789 DEBUG [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-14 04:16:27,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. after waiting 0 ms 2023-07-14 04:16:27,789 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-14 04:16:27,789 DEBUG [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1504): Waiting on 4df932c55bcbb5ec85af558c057d2606 2023-07-14 04:16:27,790 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-14 04:16:27,790 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-14 04:16:27,790 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-14 04:16:27,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 
2023-07-14 04:16:27,789 DEBUG [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1504): Waiting on 768a0e28f09d1bbbf09bf2c25810f971 2023-07-14 04:16:27,790 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-14 04:16:27,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4df932c55bcbb5ec85af558c057d2606 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-14 04:16:27,795 DEBUG [RS:3;jenkins-hbase4:46197] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs 2023-07-14 04:16:27,795 INFO [RS:3;jenkins-hbase4:46197] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46197%2C1689308185572:(num 1689308185876) 2023-07-14 04:16:27,795 DEBUG [RS:3;jenkins-hbase4:46197] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,795 INFO [RS:3;jenkins-hbase4:46197] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,797 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,803 INFO [RS:3;jenkins-hbase4:46197] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:27,803 INFO [RS:3;jenkins-hbase4:46197] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:27,803 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:27,803 INFO [RS:3;jenkins-hbase4:46197] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:27,803 INFO [RS:3;jenkins-hbase4:46197] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-14 04:16:27,804 INFO [RS:3;jenkins-hbase4:46197] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46197 2023-07-14 04:16:27,804 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/.tmp/info/2f8628c4644845ce9e023ea7129748e3 2023-07-14 04:16:27,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/.tmp/m/416dc351f8ad42ef9311c9712a7932c0 2023-07-14 04:16:27,831 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/info/1debc58047534578b1bcd4fd16a36e0a 2023-07-14 04:16:27,834 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2f8628c4644845ce9e023ea7129748e3 2023-07-14 04:16:27,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/.tmp/info/2f8628c4644845ce9e023ea7129748e3 as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/info/2f8628c4644845ce9e023ea7129748e3 2023-07-14 04:16:27,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 416dc351f8ad42ef9311c9712a7932c0 2023-07-14 04:16:27,838 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1debc58047534578b1bcd4fd16a36e0a 2023-07-14 04:16:27,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/.tmp/m/416dc351f8ad42ef9311c9712a7932c0 as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/m/416dc351f8ad42ef9311c9712a7932c0 2023-07-14 04:16:27,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2f8628c4644845ce9e023ea7129748e3 2023-07-14 04:16:27,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/info/2f8628c4644845ce9e023ea7129748e3, entries=3, sequenceid=9, filesize=5.0 K 2023-07-14 04:16:27,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 4df932c55bcbb5ec85af558c057d2606 in 53ms, sequenceid=9, compaction requested=false 2023-07-14 04:16:27,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 416dc351f8ad42ef9311c9712a7932c0 2023-07-14 04:16:27,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/m/416dc351f8ad42ef9311c9712a7932c0, entries=12, sequenceid=29, filesize=5.4 K 2023-07-14 04:16:27,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 768a0e28f09d1bbbf09bf2c25810f971 in 58ms, sequenceid=29, compaction requested=false 2023-07-14 04:16:27,853 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/rsgroup/768a0e28f09d1bbbf09bf2c25810f971/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-14 04:16:27,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:27,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:27,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 768a0e28f09d1bbbf09bf2c25810f971: 2023-07-14 04:16:27,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689308185114.768a0e28f09d1bbbf09bf2c25810f971. 2023-07-14 04:16:27,863 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/rep_barrier/53a0ca31d0a5415dbadbafea68112870 2023-07-14 04:16:27,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/namespace/4df932c55bcbb5ec85af558c057d2606/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-14 04:16:27,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 2023-07-14 04:16:27,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4df932c55bcbb5ec85af558c057d2606: 2023-07-14 04:16:27,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689308185120.4df932c55bcbb5ec85af558c057d2606. 
2023-07-14 04:16:27,865 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,868 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53a0ca31d0a5415dbadbafea68112870 2023-07-14 04:16:27,883 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/table/6b0df6c89a7d4cc18cf74498f5084288 2023-07-14 04:16:27,888 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6b0df6c89a7d4cc18cf74498f5084288 2023-07-14 04:16:27,889 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/info/1debc58047534578b1bcd4fd16a36e0a as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/info/1debc58047534578b1bcd4fd16a36e0a 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46197,1689308185572 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:27,891 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:27,892 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46197,1689308185572] 2023-07-14 04:16:27,892 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46197,1689308185572; numProcessing=1 2023-07-14 04:16:27,894 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1debc58047534578b1bcd4fd16a36e0a 2023-07-14 04:16:27,894 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/info/1debc58047534578b1bcd4fd16a36e0a, entries=22, sequenceid=26, filesize=7.3 K 2023-07-14 04:16:27,895 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46197,1689308185572 already deleted, retry=false 2023-07-14 04:16:27,895 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46197,1689308185572 expired; onlineServers=3 2023-07-14 04:16:27,895 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/rep_barrier/53a0ca31d0a5415dbadbafea68112870 as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/rep_barrier/53a0ca31d0a5415dbadbafea68112870 2023-07-14 04:16:27,900 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 53a0ca31d0a5415dbadbafea68112870 2023-07-14 04:16:27,900 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/rep_barrier/53a0ca31d0a5415dbadbafea68112870, entries=1, sequenceid=26, filesize=4.9 K 2023-07-14 04:16:27,901 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/.tmp/table/6b0df6c89a7d4cc18cf74498f5084288 as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/table/6b0df6c89a7d4cc18cf74498f5084288 2023-07-14 04:16:27,905 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6b0df6c89a7d4cc18cf74498f5084288 2023-07-14 04:16:27,905 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/table/6b0df6c89a7d4cc18cf74498f5084288, entries=6, sequenceid=26, filesize=5.1 K 2023-07-14 04:16:27,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 
KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 116ms, sequenceid=26, compaction requested=false 2023-07-14 04:16:27,915 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-14 04:16:27,916 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-14 04:16:27,916 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:27,916 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-14 04:16:27,916 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-14 04:16:27,990 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35659,1689308184169; all regions closed. 2023-07-14 04:16:27,990 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35253,1689308184021; all regions closed. 2023-07-14 04:16:27,990 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39705,1689308183864; all regions closed. 2023-07-14 04:16:27,997 DEBUG [RS:1;jenkins-hbase4:35253] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs 2023-07-14 04:16:27,997 DEBUG [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35253%2C1689308184021:(num 1689308184836) 2023-07-14 04:16:27,997 DEBUG [RS:1;jenkins-hbase4:35253] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,997 INFO [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35659%2C1689308184169.meta:.meta(num 1689308185047) 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:27,997 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:27,997 INFO [RS:1;jenkins-hbase4:35253] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-14 04:16:27,997 DEBUG [RS:0;jenkins-hbase4:39705] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs 2023-07-14 04:16:27,998 INFO [RS:0;jenkins-hbase4:39705] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39705%2C1689308183864:(num 1689308184811) 2023-07-14 04:16:27,998 DEBUG [RS:0;jenkins-hbase4:39705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:27,998 INFO [RS:0;jenkins-hbase4:39705] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:27,999 INFO [RS:1;jenkins-hbase4:35253] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35253 2023-07-14 04:16:27,999 INFO [RS:0;jenkins-hbase4:39705] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:27,999 INFO [RS:0;jenkins-hbase4:39705] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-14 04:16:27,999 INFO [RS:0;jenkins-hbase4:39705] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-14 04:16:27,999 INFO [RS:0;jenkins-hbase4:39705] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-14 04:16:27,999 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:28,001 INFO [RS:0;jenkins-hbase4:39705] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39705 2023-07-14 04:16:28,001 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:28,001 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:28,001 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:28,001 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35253,1689308184021 2023-07-14 04:16:28,003 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:28,003 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39705,1689308183864 2023-07-14 04:16:28,004 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration 
[jenkins-hbase4.apache.org,35253,1689308184021] 2023-07-14 04:16:28,004 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35253,1689308184021; numProcessing=2 2023-07-14 04:16:28,004 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:28,006 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35253,1689308184021 already deleted, retry=false 2023-07-14 04:16:28,006 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35253,1689308184021 expired; onlineServers=2 2023-07-14 04:16:28,006 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39705,1689308183864] 2023-07-14 04:16:28,006 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39705,1689308183864; numProcessing=3 2023-07-14 04:16:28,006 DEBUG [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/oldWALs 2023-07-14 04:16:28,006 INFO [RS:2;jenkins-hbase4:35659] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35659%2C1689308184169:(num 1689308184842) 2023-07-14 04:16:28,006 DEBUG [RS:2;jenkins-hbase4:35659] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:28,007 INFO [RS:2;jenkins-hbase4:35659] regionserver.LeaseManager(133): Closed leases 2023-07-14 04:16:28,007 INFO [RS:2;jenkins-hbase4:35659] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-14 04:16:28,007 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-14 04:16:28,008 INFO [RS:2;jenkins-hbase4:35659] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35659 2023-07-14 04:16:28,008 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39705,1689308183864 already deleted, retry=false 2023-07-14 04:16:28,008 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39705,1689308183864 expired; onlineServers=1 2023-07-14 04:16:28,009 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35659,1689308184169 2023-07-14 04:16:28,009 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-14 04:16:28,010 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35659,1689308184169] 2023-07-14 04:16:28,010 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35659,1689308184169; numProcessing=4 2023-07-14 04:16:28,011 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35659,1689308184169 already deleted, retry=false 2023-07-14 04:16:28,011 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35659,1689308184169 expired; onlineServers=0 2023-07-14 04:16:28,011 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44421,1689308183695' ***** 2023-07-14 04:16:28,012 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-14 04:16:28,012 DEBUG [M:0;jenkins-hbase4:44421] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@301705a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-14 04:16:28,012 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-14 04:16:28,014 INFO [M:0;jenkins-hbase4:44421] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7b38f305{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-14 04:16:28,015 INFO [M:0;jenkins-hbase4:44421] server.AbstractConnector(383): Stopped ServerConnector@65704643{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:28,015 INFO [M:0;jenkins-hbase4:44421] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-14 04:16:28,016 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-14 04:16:28,016 INFO [M:0;jenkins-hbase4:44421] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@eb3294d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-14 04:16:28,016 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-14 04:16:28,016 INFO [M:0;jenkins-hbase4:44421] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64e35faa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/hadoop.log.dir/,STOPPED} 2023-07-14 04:16:28,017 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-14 04:16:28,017 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44421,1689308183695 2023-07-14 04:16:28,017 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44421,1689308183695; all regions closed. 2023-07-14 04:16:28,017 DEBUG [M:0;jenkins-hbase4:44421] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-14 04:16:28,017 INFO [M:0;jenkins-hbase4:44421] master.HMaster(1491): Stopping master jetty server 2023-07-14 04:16:28,017 INFO [M:0;jenkins-hbase4:44421] server.AbstractConnector(383): Stopped ServerConnector@2dc7c695{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-14 04:16:28,018 DEBUG [M:0;jenkins-hbase4:44421] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-14 04:16:28,018 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-14 04:16:28,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308184527] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689308184527,5,FailOnTimeoutGroup] 2023-07-14 04:16:28,018 DEBUG [M:0;jenkins-hbase4:44421] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-14 04:16:28,018 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308184527] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689308184527,5,FailOnTimeoutGroup] 2023-07-14 04:16:28,018 INFO [M:0;jenkins-hbase4:44421] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-14 04:16:28,018 INFO [M:0;jenkins-hbase4:44421] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-14 04:16:28,018 INFO [M:0;jenkins-hbase4:44421] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-14 04:16:28,018 DEBUG [M:0;jenkins-hbase4:44421] master.HMaster(1512): Stopping service threads 2023-07-14 04:16:28,018 INFO [M:0;jenkins-hbase4:44421] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-14 04:16:28,018 ERROR [M:0;jenkins-hbase4:44421] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-14 04:16:28,019 INFO [M:0;jenkins-hbase4:44421] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-14 04:16:28,019 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-14 04:16:28,019 DEBUG [M:0;jenkins-hbase4:44421] zookeeper.ZKUtil(398): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-14 04:16:28,019 WARN [M:0;jenkins-hbase4:44421] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-14 04:16:28,019 INFO [M:0;jenkins-hbase4:44421] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-14 04:16:28,019 INFO [M:0;jenkins-hbase4:44421] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-14 04:16:28,019 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-14 04:16:28,019 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:28,019 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:28,019 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-14 04:16:28,019 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-14 04:16:28,019 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.64 KB 2023-07-14 04:16:28,031 INFO [M:0;jenkins-hbase4:44421] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e32485a2e8414e7ea46ed8f056f779e7 2023-07-14 04:16:28,037 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e32485a2e8414e7ea46ed8f056f779e7 as hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e32485a2e8414e7ea46ed8f056f779e7 2023-07-14 04:16:28,041 INFO [M:0;jenkins-hbase4:44421] regionserver.HStore(1080): Added hdfs://localhost:33863/user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e32485a2e8414e7ea46ed8f056f779e7, entries=22, sequenceid=175, filesize=11.1 K 2023-07-14 04:16:28,042 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78013, heapSize ~90.63 KB/92800, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false 2023-07-14 04:16:28,044 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-14 04:16:28,044 DEBUG [M:0;jenkins-hbase4:44421] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-14 04:16:28,053 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/ee8e6d40-376e-ebeb-1820-b660d5edb56b/MasterData/WALs/jenkins-hbase4.apache.org,44421,1689308183695/jenkins-hbase4.apache.org%2C44421%2C1689308183695.1689308184416 not finished, retry = 0 2023-07-14 04:16:28,154 INFO [M:0;jenkins-hbase4:44421] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-14 04:16:28,154 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-14 04:16:28,155 INFO [M:0;jenkins-hbase4:44421] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44421 2023-07-14 04:16:28,157 DEBUG [M:0;jenkins-hbase4:44421] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44421,1689308183695 already deleted, retry=false 2023-07-14 04:16:28,368 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-14 04:16:28,368 INFO [M:0;jenkins-hbase4:44421] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44421,1689308183695; zookeeper connection closed. 
2023-07-14 04:16:28,368 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): master:44421-0x101620bbf7a0000, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,468 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,468 INFO [RS:2;jenkins-hbase4:35659] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35659,1689308184169; zookeeper connection closed.
2023-07-14 04:16:28,468 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35659-0x101620bbf7a0003, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,469 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@72eb3816] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@72eb3816
2023-07-14 04:16:28,569 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,569 INFO [RS:0;jenkins-hbase4:39705] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39705,1689308183864; zookeeper connection closed.
2023-07-14 04:16:28,569 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:39705-0x101620bbf7a0001, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,569 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3b98eae9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3b98eae9
2023-07-14 04:16:28,669 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,669 INFO [RS:1;jenkins-hbase4:35253] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35253,1689308184021; zookeeper connection closed.
2023-07-14 04:16:28,669 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:35253-0x101620bbf7a0002, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,669 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c1d4e50] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c1d4e50
2023-07-14 04:16:28,769 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,769 INFO [RS:3;jenkins-hbase4:46197] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46197,1689308185572; zookeeper connection closed.
2023-07-14 04:16:28,769 DEBUG [Listener at localhost/38975-EventThread] zookeeper.ZKWatcher(600): regionserver:46197-0x101620bbf7a000b, quorum=127.0.0.1:62981, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-14 04:16:28,770 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@296b4df9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@296b4df9
2023-07-14 04:16:28,770 INFO [Listener at localhost/38975] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-14 04:16:28,770 WARN [Listener at localhost/38975] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-14 04:16:28,773 INFO [Listener at localhost/38975] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-14 04:16:28,876 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-14 04:16:28,876 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-760380148-172.31.14.131-1689308182979 (Datanode Uuid aeb955c7-9edd-4b44-bf51-695af2ee1119) service to localhost/127.0.0.1:33863
2023-07-14 04:16:28,877 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data5/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:28,877 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data6/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:28,878 WARN [Listener at localhost/38975] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-14 04:16:28,881 INFO [Listener at localhost/38975] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-14 04:16:28,984 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-14 04:16:28,984 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-760380148-172.31.14.131-1689308182979 (Datanode Uuid b26285dd-7283-4221-8be6-99e8795826b9) service to localhost/127.0.0.1:33863
2023-07-14 04:16:28,985 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data3/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:28,985 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data4/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:28,986 WARN [Listener at localhost/38975] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-14 04:16:28,989 INFO [Listener at localhost/38975] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-14 04:16:29,093 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-14 04:16:29,093 WARN [BP-760380148-172.31.14.131-1689308182979 heartbeating to localhost/127.0.0.1:33863] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-760380148-172.31.14.131-1689308182979 (Datanode Uuid fc6f153f-2743-48e5-83f4-385a54b0a384) service to localhost/127.0.0.1:33863
2023-07-14 04:16:29,094 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data1/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:29,094 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/a29f8423-8324-308a-5db5-8a0d95bac8e2/cluster_3cac0887-8e5f-5f28-05d9-a391ee909a38/dfs/data/data2/current/BP-760380148-172.31.14.131-1689308182979] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-14 04:16:29,106 INFO [Listener at localhost/38975] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-14 04:16:29,223 INFO [Listener at localhost/38975] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-14 04:16:29,264 INFO [Listener at localhost/38975] hbase.HBaseTestingUtility(1293): Minicluster is down