2023-07-18 19:14:32,029 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5 2023-07-18 19:14:32,046 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-18 19:14:32,065 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 19:14:32,065 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34, deleteOnExit=true 2023-07-18 19:14:32,065 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 19:14:32,066 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/test.cache.data in system properties and HBase conf 2023-07-18 19:14:32,067 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 19:14:32,067 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir in system properties and HBase conf 2023-07-18 19:14:32,068 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 19:14:32,068 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 19:14:32,068 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 19:14:32,232 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-18 19:14:32,754 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 19:14:32,758 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:14:32,759 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:14:32,759 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 19:14:32,759 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:14:32,759 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 19:14:32,760 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 19:14:32,760 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:14:32,761 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:14:32,761 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 19:14:32,761 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/nfs.dump.dir in system properties and HBase conf 2023-07-18 19:14:32,762 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/java.io.tmpdir in system properties and HBase conf 2023-07-18 19:14:32,762 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:14:32,762 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 19:14:32,763 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 19:14:33,285 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:14:33,289 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:14:33,617 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 19:14:33,816 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-18 19:14:33,830 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:14:33,876 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:14:33,911 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/java.io.tmpdir/Jetty_localhost_39509_hdfs____65kuwy/webapp 2023-07-18 19:14:34,055 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39509 2023-07-18 19:14:34,094 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:14:34,094 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:14:34,663 WARN [Listener at localhost/44967] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:14:34,767 WARN [Listener at localhost/44967] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:14:34,792 WARN [Listener at localhost/44967] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:14:34,798 INFO [Listener at localhost/44967] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:14:34,805 INFO [Listener at localhost/44967] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/java.io.tmpdir/Jetty_localhost_39337_datanode____1jo9w8/webapp 2023-07-18 19:14:34,951 INFO [Listener at localhost/44967] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39337 2023-07-18 19:14:35,424 WARN [Listener at localhost/41091] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:14:35,457 WARN [Listener at localhost/41091] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:14:35,462 WARN [Listener at localhost/41091] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:14:35,464 INFO [Listener at localhost/41091] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:14:35,470 INFO [Listener at localhost/41091] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/java.io.tmpdir/Jetty_localhost_37659_datanode____.487kwz/webapp 2023-07-18 19:14:35,585 INFO [Listener at localhost/41091] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37659 2023-07-18 19:14:35,596 WARN [Listener at localhost/43337] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:14:35,615 WARN [Listener at localhost/43337] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:14:35,618 WARN [Listener at localhost/43337] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:14:35,619 INFO [Listener at localhost/43337] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:14:35,625 INFO [Listener at localhost/43337] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/java.io.tmpdir/Jetty_localhost_43233_datanode____.1ys12e/webapp 2023-07-18 19:14:35,749 INFO [Listener at localhost/43337] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43233 2023-07-18 19:14:35,758 WARN [Listener at localhost/40787] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:14:36,005 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5c06a0852d64d4f: Processing first storage report for DS-2103302f-84d7-4ff9-aaf8-b2138d78776d from datanode 4ed0c958-2f47-4e92-90ec-43c457654399 2023-07-18 19:14:36,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5c06a0852d64d4f: from storage DS-2103302f-84d7-4ff9-aaf8-b2138d78776d node DatanodeRegistration(127.0.0.1:42397, datanodeUuid=4ed0c958-2f47-4e92-90ec-43c457654399, infoPort=41955, 
infoSecurePort=0, ipcPort=43337, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,007 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4c0bdd0107b74c: Processing first storage report for DS-ca485556-ee09-4bc3-9270-847b7b30f4d3 from datanode a52f75a6-c146-4051-8542-c731e3261369 2023-07-18 19:14:36,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c4c0bdd0107b74c: from storage DS-ca485556-ee09-4bc3-9270-847b7b30f4d3 node DatanodeRegistration(127.0.0.1:33839, datanodeUuid=a52f75a6-c146-4051-8542-c731e3261369, infoPort=41999, infoSecurePort=0, ipcPort=40787, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,007 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcc0e75a3deb09d4: Processing first storage report for DS-ca4a6244-a1d0-4141-9e63-a51dd88baded from datanode b5c05d4a-6a19-4843-b69c-c57368f68df2 2023-07-18 19:14:36,007 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcc0e75a3deb09d4: from storage DS-ca4a6244-a1d0-4141-9e63-a51dd88baded node DatanodeRegistration(127.0.0.1:46877, datanodeUuid=b5c05d4a-6a19-4843-b69c-c57368f68df2, infoPort=36027, infoSecurePort=0, ipcPort=41091, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5c06a0852d64d4f: Processing first storage report for DS-4612bc85-0dc4-4fcf-bcb3-7dd885941619 from datanode 4ed0c958-2f47-4e92-90ec-43c457654399 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5c06a0852d64d4f: from storage DS-4612bc85-0dc4-4fcf-bcb3-7dd885941619 node DatanodeRegistration(127.0.0.1:42397, datanodeUuid=4ed0c958-2f47-4e92-90ec-43c457654399, infoPort=41955, infoSecurePort=0, ipcPort=43337, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2c4c0bdd0107b74c: Processing first storage report for DS-b0e01913-202d-476e-adbb-3a4bb58c4a00 from datanode a52f75a6-c146-4051-8542-c731e3261369 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2c4c0bdd0107b74c: from storage DS-b0e01913-202d-476e-adbb-3a4bb58c4a00 node DatanodeRegistration(127.0.0.1:33839, datanodeUuid=a52f75a6-c146-4051-8542-c731e3261369, infoPort=41999, infoSecurePort=0, ipcPort=40787, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcc0e75a3deb09d4: Processing first storage report for DS-4aed9258-ad45-4b47-b5f7-0115e2555707 from datanode b5c05d4a-6a19-4843-b69c-c57368f68df2 2023-07-18 19:14:36,008 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcc0e75a3deb09d4: from storage 
DS-4aed9258-ad45-4b47-b5f7-0115e2555707 node DatanodeRegistration(127.0.0.1:46877, datanodeUuid=b5c05d4a-6a19-4843-b69c-c57368f68df2, infoPort=36027, infoSecurePort=0, ipcPort=41091, storageInfo=lv=-57;cid=testClusterID;nsid=2099053805;c=1689707673371), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:14:36,223 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5 2023-07-18 19:14:36,303 INFO [Listener at localhost/40787] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/zookeeper_0, clientPort=62147, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 19:14:36,321 INFO [Listener at localhost/40787] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62147 2023-07-18 19:14:36,331 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:36,333 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:37,002 INFO [Listener at localhost/40787] util.FSUtils(471): Created version file at hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 with version=8 2023-07-18 19:14:37,002 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/hbase-staging 2023-07-18 19:14:37,010 DEBUG [Listener at localhost/40787] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 19:14:37,010 DEBUG [Listener at localhost/40787] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 19:14:37,011 DEBUG [Listener at localhost/40787] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 19:14:37,011 DEBUG [Listener at localhost/40787] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
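The block of log above records HBaseTestingUtility bringing up a mini DFS, a single-node ZooKeeper ensemble (clientPort=62147) and preparing the HBase root dir with the options StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1, createRootDir=false, createWALDir=false}. As a rough, hedged sketch of how a test arrives at output like this (class and field names below are illustrative, not taken from TestRSGroupsAdmin1; only the option values are copied from the logged StartMiniClusterOption line):

// Minimal sketch, assuming the standard HBaseTestingUtility test API.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)          // StartMiniClusterOption{numMasters=1, ...}
        .numRegionServers(3)    // numRegionServers=3
        .numDataNodes(3)        // numDataNodes=3
        .numZkServers(1)        // numZkServers=1
        .createRootDir(false)   // createRootDir=false
        .createWALDir(false)    // createWALDir=false
        .build();
    TEST_UTIL.startMiniCluster(option);   // starts DFS, ZooKeeper and HBase, as logged above
    try {
      // test body would run against TEST_UTIL.getConnection() here
    } finally {
      TEST_UTIL.shutdownMiniCluster();
    }
  }
}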
2023-07-18 19:14:37,382 INFO [Listener at localhost/40787] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-18 19:14:38,012 INFO [Listener at localhost/40787] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:14:38,067 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:38,068 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:38,068 INFO [Listener at localhost/40787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:14:38,068 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:38,068 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:14:38,228 INFO [Listener at localhost/40787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:14:38,338 DEBUG [Listener at localhost/40787] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-18 19:14:38,453 INFO [Listener at localhost/40787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43617 2023-07-18 19:14:38,466 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:38,468 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:38,492 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43617 connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:38,544 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:436170x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:38,547 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43617-0x10179db857e0000 connected 2023-07-18 19:14:38,574 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:14:38,574 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:14:38,578 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:14:38,591 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43617 2023-07-18 19:14:38,591 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43617 2023-07-18 19:14:38,591 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43617 2023-07-18 19:14:38,593 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43617 2023-07-18 19:14:38,594 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43617 2023-07-18 19:14:38,632 INFO [Listener at localhost/40787] log.Log(170): Logging initialized @7335ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-18 19:14:38,791 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:14:38,792 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:14:38,793 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:14:38,795 INFO [Listener at localhost/40787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 19:14:38,795 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:14:38,795 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:14:38,800 INFO [Listener at localhost/40787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
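At this point in the log the master's RPC server is bound and its ZooKeeper session to the ensemble 127.0.0.1:62147 is established. A minimal, hedged sketch of how an external client would point at the same mini cluster; in the real test a connection is normally obtained from the testing utility instead, and the explicit properties below are shown only to tie the logged quorum and client port to their configuration keys:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  // Connects to the mini cluster whose ZooKeeper ensemble is logged above.
  public static Connection connect() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");           // ensemble=127.0.0.1:62147 in the log
    conf.setInt("hbase.zookeeper.property.clientPort", 62147); // clientPort reported by MiniZooKeeperCluster
    return ConnectionFactory.createConnection(conf);
  }
}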
2023-07-18 19:14:38,865 INFO [Listener at localhost/40787] http.HttpServer(1146): Jetty bound to port 33409 2023-07-18 19:14:38,867 INFO [Listener at localhost/40787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:38,897 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:38,900 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a1c2514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:14:38,901 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:38,901 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ce7501{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:14:38,966 INFO [Listener at localhost/40787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:14:38,980 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:14:38,980 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:14:38,982 INFO [Listener at localhost/40787] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 19:14:38,989 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,015 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@14922087{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 19:14:39,029 INFO [Listener at localhost/40787] server.AbstractConnector(333): Started ServerConnector@7a905274{HTTP/1.1, (http/1.1)}{0.0.0.0:33409} 2023-07-18 19:14:39,029 INFO [Listener at localhost/40787] server.Server(415): Started @7733ms 2023-07-18 19:14:39,033 INFO [Listener at localhost/40787] master.HMaster(444): hbase.rootdir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3, hbase.cluster.distributed=false 2023-07-18 19:14:39,121 INFO [Listener at localhost/40787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:14:39,121 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,121 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,122 INFO [Listener at localhost/40787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
19:14:39,122 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,122 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:14:39,129 INFO [Listener at localhost/40787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:14:39,132 INFO [Listener at localhost/40787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39561 2023-07-18 19:14:39,135 INFO [Listener at localhost/40787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:14:39,142 DEBUG [Listener at localhost/40787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:14:39,143 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,144 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,146 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39561 connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:39,151 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:395610x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:39,153 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39561-0x10179db857e0001 connected 2023-07-18 19:14:39,153 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:14:39,155 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:14:39,156 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:14:39,156 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39561 2023-07-18 19:14:39,158 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39561 2023-07-18 19:14:39,160 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39561 2023-07-18 19:14:39,161 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39561 2023-07-18 19:14:39,162 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39561 2023-07-18 19:14:39,166 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:14:39,166 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:14:39,166 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:14:39,167 INFO [Listener at localhost/40787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:14:39,168 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:14:39,168 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:14:39,168 INFO [Listener at localhost/40787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:14:39,170 INFO [Listener at localhost/40787] http.HttpServer(1146): Jetty bound to port 39633 2023-07-18 19:14:39,170 INFO [Listener at localhost/40787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:39,174 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,175 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b5a29b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:14:39,175 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,175 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@21f5379f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:14:39,187 INFO [Listener at localhost/40787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:14:39,188 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:14:39,189 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:14:39,189 INFO [Listener at localhost/40787] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 19:14:39,190 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,193 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@51e5ec5c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:14:39,195 INFO [Listener at localhost/40787] server.AbstractConnector(333): Started ServerConnector@147b8d82{HTTP/1.1, (http/1.1)}{0.0.0.0:39633} 2023-07-18 19:14:39,195 INFO [Listener at localhost/40787] server.Server(415): Started @7898ms 2023-07-18 19:14:39,208 INFO [Listener at localhost/40787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:14:39,208 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,208 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,209 INFO [Listener at localhost/40787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:14:39,209 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,209 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:14:39,209 INFO [Listener at localhost/40787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:14:39,211 INFO [Listener at localhost/40787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41417 2023-07-18 19:14:39,212 INFO [Listener at localhost/40787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:14:39,219 DEBUG [Listener at localhost/40787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:14:39,220 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,222 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,223 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41417 connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:39,227 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:414170x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:39,228 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:414170x0, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:14:39,229 
DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:414170x0, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:14:39,230 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:414170x0, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:14:39,236 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41417-0x10179db857e0002 connected 2023-07-18 19:14:39,237 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41417 2023-07-18 19:14:39,237 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41417 2023-07-18 19:14:39,238 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41417 2023-07-18 19:14:39,245 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41417 2023-07-18 19:14:39,245 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41417 2023-07-18 19:14:39,248 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:14:39,249 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:14:39,249 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:14:39,249 INFO [Listener at localhost/40787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:14:39,250 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:14:39,250 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:14:39,250 INFO [Listener at localhost/40787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 19:14:39,251 INFO [Listener at localhost/40787] http.HttpServer(1146): Jetty bound to port 42835 2023-07-18 19:14:39,251 INFO [Listener at localhost/40787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:39,253 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,254 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eb4fc00{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:14:39,254 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,255 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d6b47ba{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:14:39,266 INFO [Listener at localhost/40787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:14:39,267 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:14:39,268 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:14:39,268 INFO [Listener at localhost/40787] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:14:39,269 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,271 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@141febe3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:14:39,272 INFO [Listener at localhost/40787] server.AbstractConnector(333): Started ServerConnector@4a909f08{HTTP/1.1, (http/1.1)}{0.0.0.0:42835} 2023-07-18 19:14:39,272 INFO [Listener at localhost/40787] server.Server(415): Started @7975ms 2023-07-18 19:14:39,287 INFO [Listener at localhost/40787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:14:39,287 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,287 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,288 INFO [Listener at localhost/40787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:14:39,288 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 19:14:39,288 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:14:39,288 INFO [Listener at localhost/40787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:14:39,290 INFO [Listener at localhost/40787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36387 2023-07-18 19:14:39,290 INFO [Listener at localhost/40787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:14:39,291 DEBUG [Listener at localhost/40787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:14:39,293 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,295 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,296 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36387 connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:39,299 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:363870x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:39,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36387-0x10179db857e0003 connected 2023-07-18 19:14:39,300 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:14:39,301 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:14:39,301 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:14:39,302 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36387 2023-07-18 19:14:39,302 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36387 2023-07-18 19:14:39,303 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36387 2023-07-18 19:14:39,303 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36387 2023-07-18 19:14:39,303 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36387 2023-07-18 19:14:39,305 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:14:39,306 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:14:39,306 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:14:39,306 INFO [Listener at localhost/40787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:14:39,306 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:14:39,306 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:14:39,307 INFO [Listener at localhost/40787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:14:39,308 INFO [Listener at localhost/40787] http.HttpServer(1146): Jetty bound to port 34635 2023-07-18 19:14:39,308 INFO [Listener at localhost/40787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:39,310 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,310 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7abf9a1c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:14:39,311 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,311 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c2c946a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:14:39,320 INFO [Listener at localhost/40787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:14:39,321 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:14:39,321 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:14:39,321 INFO [Listener at localhost/40787] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 19:14:39,322 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:39,323 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@13359f3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:14:39,324 INFO [Listener at localhost/40787] server.AbstractConnector(333): Started ServerConnector@86ca53{HTTP/1.1, (http/1.1)}{0.0.0.0:34635} 2023-07-18 19:14:39,324 INFO [Listener at localhost/40787] server.Server(415): Started @8027ms 2023-07-18 19:14:39,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:39,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4fd4c8e1{HTTP/1.1, (http/1.1)}{0.0.0.0:33599} 2023-07-18 19:14:39,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8036ms 2023-07-18 19:14:39,333 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:39,344 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:14:39,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:39,365 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:14:39,365 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:14:39,365 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:14:39,365 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:14:39,366 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:39,368 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:14:39,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:14:39,369 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43617,1689707677179 from backup master directory 2023-07-18 19:14:39,373 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:39,373 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:14:39,374 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:14:39,374 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:39,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 19:14:39,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 19:14:39,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/hbase.id with ID: fb571e25-a6b4-4dee-a3ee-d614c0515106 2023-07-18 19:14:39,530 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:39,549 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:39,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4b1671e2 to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:39,657 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55b178a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:39,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:39,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 19:14:39,715 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 19:14:39,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 19:14:39,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 19:14:39,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 19:14:39,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:39,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store-tmp 2023-07-18 19:14:39,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:39,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 19:14:39,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:14:39,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:14:39,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 19:14:39,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:14:39,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 19:14:39,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:14:39,801 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/WALs/jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:39,822 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43617%2C1689707677179, suffix=, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/WALs/jenkins-hbase4.apache.org,43617,1689707677179, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/oldWALs, maxLogs=10 2023-07-18 19:14:39,883 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:39,883 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:39,883 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:39,891 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 19:14:39,961 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/WALs/jenkins-hbase4.apache.org,43617,1689707677179/jenkins-hbase4.apache.org%2C43617%2C1689707677179.1689707679835 2023-07-18 19:14:39,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK], DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK]] 2023-07-18 19:14:39,962 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:39,963 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:39,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:39,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:40,052 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:40,060 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 19:14:40,113 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 19:14:40,132 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 19:14:40,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:40,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:40,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:14:40,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:40,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10092216160, jitterRate=-0.060089126229286194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:40,177 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:14:40,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 19:14:40,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 19:14:40,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 19:14:40,221 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 19:14:40,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 19:14:40,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 54 msec 2023-07-18 19:14:40,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 19:14:40,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 19:14:40,310 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-18 19:14:40,318 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 19:14:40,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 19:14:40,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 19:14:40,338 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:40,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 19:14:40,340 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 19:14:40,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 19:14:40,364 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:14:40,364 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:14:40,364 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:14:40,364 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:14:40,364 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:40,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43617,1689707677179, sessionid=0x10179db857e0000, setting cluster-up flag (Was=false) 2023-07-18 19:14:40,389 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:40,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 19:14:40,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:40,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:40,408 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 19:14:40,410 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:40,412 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.hbase-snapshot/.tmp 2023-07-18 19:14:40,434 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(951): ClusterId : fb571e25-a6b4-4dee-a3ee-d614c0515106 2023-07-18 19:14:40,437 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(951): ClusterId : fb571e25-a6b4-4dee-a3ee-d614c0515106 2023-07-18 19:14:40,438 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(951): ClusterId : fb571e25-a6b4-4dee-a3ee-d614c0515106 2023-07-18 19:14:40,444 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:14:40,444 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:14:40,446 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:14:40,458 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:14:40,458 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:14:40,458 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:14:40,458 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:14:40,458 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:14:40,458 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:14:40,463 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:14:40,463 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:14:40,463 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:14:40,465 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ReadOnlyZKClient(139): Connect 0x65553943 to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-18 19:14:40,466 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ReadOnlyZKClient(139): Connect 0x0e0e1d9b to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:40,466 DEBUG [RS:1;jenkins-hbase4:41417] zookeeper.ReadOnlyZKClient(139): Connect 0x504f2fba to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:40,485 DEBUG [RS:2;jenkins-hbase4:36387] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b92cb99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:40,485 DEBUG [RS:0;jenkins-hbase4:39561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c9f22a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:40,485 DEBUG [RS:2;jenkins-hbase4:36387] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6fad5233, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:14:40,486 DEBUG [RS:0;jenkins-hbase4:39561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f676eb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:14:40,487 DEBUG [RS:1;jenkins-hbase4:41417] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e88dbc3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:40,487 DEBUG [RS:1;jenkins-hbase4:41417] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ab744ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:14:40,519 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36387 2023-07-18 19:14:40,521 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39561 2023-07-18 19:14:40,521 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41417 2023-07-18 19:14:40,526 INFO [RS:2;jenkins-hbase4:36387] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:14:40,527 INFO [RS:1;jenkins-hbase4:41417] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:14:40,527 INFO [RS:1;jenkins-hbase4:41417] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:14:40,526 INFO [RS:0;jenkins-hbase4:39561] regionserver.RegionServerCoprocessorHost(66): System 
coprocessor loading is enabled 2023-07-18 19:14:40,527 INFO [RS:0;jenkins-hbase4:39561] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:14:40,527 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:14:40,527 INFO [RS:2;jenkins-hbase4:36387] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:14:40,528 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:14:40,527 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:14:40,531 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:39561, startcode=1689707679120 2023-07-18 19:14:40,531 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:36387, startcode=1689707679286 2023-07-18 19:14:40,531 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:41417, startcode=1689707679207 2023-07-18 19:14:40,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 19:14:40,593 DEBUG [RS:1;jenkins-hbase4:41417] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:14:40,593 DEBUG [RS:2;jenkins-hbase4:36387] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:14:40,593 DEBUG [RS:0;jenkins-hbase4:39561] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:14:40,603 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 19:14:40,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:14:40,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 19:14:40,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 19:14:40,657 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45259, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:14:40,657 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57493, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:14:40,657 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57287, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:14:40,670 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:40,685 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:40,686 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:40,706 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 19:14:40,713 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(2830): 
Master is not running yet 2023-07-18 19:14:40,713 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 19:14:40,713 WARN [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 19:14:40,713 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 19:14:40,713 WARN [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 19:14:40,714 WARN [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 19:14:40,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:14:40,760 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 19:14:40,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:14:40,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-18 19:14:40,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:14:40,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:14:40,763 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:14:40,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:14:40,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 19:14:40,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:14:40,764 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689707710765 2023-07-18 19:14:40,767 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 19:14:40,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 19:14:40,773 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:14:40,774 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 19:14:40,777 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:40,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 19:14:40,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 19:14:40,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 19:14:40,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 19:14:40,785 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 19:14:40,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 19:14:40,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 19:14:40,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 19:14:40,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 19:14:40,802 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707680802,5,FailOnTimeoutGroup] 2023-07-18 19:14:40,803 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707680802,5,FailOnTimeoutGroup] 2023-07-18 19:14:40,803 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 19:14:40,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 19:14:40,814 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:39561, startcode=1689707679120 2023-07-18 19:14:40,815 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:36387, startcode=1689707679286 2023-07-18 19:14:40,815 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:41417, startcode=1689707679207 2023-07-18 19:14:40,820 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,821 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:14:40,822 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 19:14:40,827 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,827 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:14:40,828 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 19:14:40,828 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,829 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:14:40,829 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 19:14:40,838 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 2023-07-18 19:14:40,838 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44967 2023-07-18 19:14:40,838 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33409 2023-07-18 19:14:40,839 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 2023-07-18 19:14:40,840 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44967 2023-07-18 19:14:40,840 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33409 2023-07-18 19:14:40,851 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 2023-07-18 19:14:40,852 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44967 2023-07-18 19:14:40,852 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33409 2023-07-18 19:14:40,853 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:14:40,859 DEBUG [RS:1;jenkins-hbase4:41417] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,859 WARN [RS:1;jenkins-hbase4:41417] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:14:40,859 INFO [RS:1;jenkins-hbase4:41417] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:40,859 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,859 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,860 WARN [RS:2;jenkins-hbase4:36387] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 19:14:40,860 INFO [RS:2;jenkins-hbase4:36387] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:40,863 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36387,1689707679286] 2023-07-18 19:14:40,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39561,1689707679120] 2023-07-18 19:14:40,863 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41417,1689707679207] 2023-07-18 19:14:40,865 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,865 WARN [RS:0;jenkins-hbase4:39561] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:14:40,865 INFO [RS:0;jenkins-hbase4:39561] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:40,865 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,889 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,889 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,890 DEBUG [RS:1;jenkins-hbase4:41417] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:40,891 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,891 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,892 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,892 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,893 DEBUG [RS:1;jenkins-hbase4:41417] 
zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:40,896 DEBUG [RS:1;jenkins-hbase4:41417] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:40,896 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:40,897 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:40,898 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 2023-07-18 19:14:40,910 DEBUG [RS:2;jenkins-hbase4:36387] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:14:40,911 DEBUG [RS:0;jenkins-hbase4:39561] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:14:40,911 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:14:40,930 INFO [RS:0;jenkins-hbase4:39561] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:14:40,931 INFO [RS:1;jenkins-hbase4:41417] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:14:40,930 INFO [RS:2;jenkins-hbase4:36387] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:14:40,937 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:40,940 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family info of region 1588230740 2023-07-18 19:14:40,942 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:40,943 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:14:40,944 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:40,945 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:14:40,949 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:40,949 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:14:40,954 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:40,954 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:14:40,956 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:40,958 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:14:40,959 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:40,961 INFO [RS:0;jenkins-hbase4:39561] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:14:40,961 INFO [RS:2;jenkins-hbase4:36387] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:14:40,965 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:40,965 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:40,970 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
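The FlushLargeStoresPolicy entries above fall back to the region memstore flush size divided by the number of column families because hbase.hregion.percolumnfamilyflush.size.lower.bound is unset. A minimal sketch of that arithmetic, assuming the default 128 MB flush size and the three hbase:meta families (info, rep_barrier, table), reproduces the "42.7 M" figure and the flushSizeLowerBound=44739242 value seen in the nearby entries; it is illustrative only, not output of this run.

public class FlushLowerBoundSketch {
    public static void main(String[] args) {
        // Assumed default hbase.hregion.memstore.flush.size of 128 MB.
        long memstoreFlushSize = 128L * 1024 * 1024;
        int families = 3; // info, rep_barrier, table in hbase:meta
        long lowerBound = memstoreFlushSize / families;
        System.out.println(lowerBound); // 44739242, i.e. roughly 42.7 MB
    }
}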
2023-07-18 19:14:40,972 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:14:40,979 INFO [RS:1;jenkins-hbase4:41417] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:14:40,981 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:40,982 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11568656480, jitterRate=0.0774150937795639}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:14:40,982 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:14:40,982 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:14:40,982 INFO [RS:0;jenkins-hbase4:39561] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:14:40,983 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:14:40,982 INFO [RS:1;jenkins-hbase4:41417] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:14:40,982 INFO [RS:2;jenkins-hbase4:36387] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:14:40,984 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,983 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,983 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:14:40,984 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
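The PressureAwareCompactionThroughputController entries report the default 100 MB/s upper and 50 MB/s lower compaction throughput bounds with a 60000 ms tuning period. A hedged sketch of overriding those bounds in a test configuration follows; the property names are recalled from the HBase 2.x defaults and should be verified against the version under test before being relied on.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property names for the pressure-aware controller; verify before use.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 200L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 100L * 1024 * 1024);
        System.out.println(conf.get("hbase.hstore.compaction.throughput.higher.bound"));
    }
}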
2023-07-18 19:14:40,984 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:14:40,984 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:14:40,984 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:14:40,987 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:14:40,987 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:14:40,990 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:14:40,990 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:14:40,997 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,997 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,997 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,998 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,998 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,998 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,998 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,998 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:14:40,999 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:40,999 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:40,999 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:14:40,999 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:14:40,999 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 19:14:41,000 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:0;jenkins-hbase4:39561] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,000 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,001 DEBUG [RS:2;jenkins-hbase4:36387] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 
19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,001 DEBUG [RS:1;jenkins-hbase4:41417] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:41,004 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,004 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,004 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,009 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,009 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,009 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,010 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,010 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,010 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,010 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 19:14:41,023 INFO [RS:2;jenkins-hbase4:36387] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:14:41,025 INFO [RS:1;jenkins-hbase4:41417] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:14:41,028 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41417,1689707679207-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,028 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 19:14:41,028 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36387,1689707679286-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:14:41,031 INFO [RS:0;jenkins-hbase4:39561] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:14:41,036 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39561,1689707679120-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,043 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 19:14:41,057 INFO [RS:1;jenkins-hbase4:41417] regionserver.Replication(203): jenkins-hbase4.apache.org,41417,1689707679207 started 2023-07-18 19:14:41,057 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41417,1689707679207, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41417, sessionid=0x10179db857e0002 2023-07-18 19:14:41,058 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:14:41,058 DEBUG [RS:1;jenkins-hbase4:41417] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:41,058 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41417,1689707679207' 2023-07-18 19:14:41,058 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:14:41,059 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:14:41,060 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:14:41,060 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:14:41,060 DEBUG [RS:1;jenkins-hbase4:41417] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:41,060 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41417,1689707679207' 2023-07-18 19:14:41,060 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:14:41,061 DEBUG [RS:1;jenkins-hbase4:41417] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:14:41,061 DEBUG [RS:1;jenkins-hbase4:41417] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:14:41,061 INFO [RS:1;jenkins-hbase4:41417] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:14:41,062 INFO [RS:1;jenkins-hbase4:41417] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
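By this point RS:1 has registered its flush-table-proc and online-snapshot procedure members under the /hbase/flush-table-proc and /hbase/online-snapshot znodes. Those members are what coordinate a client-initiated table flush; the following is a hedged sketch of driving that path through the public Admin API, with a hypothetical table name standing in for a real one.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Hypothetical table; the flush fans out to the per-regionserver
            // flush-table-proc members registered above.
            admin.flush(TableName.valueOf("some_table"));
        }
    }
}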
2023-07-18 19:14:41,065 INFO [RS:2;jenkins-hbase4:36387] regionserver.Replication(203): jenkins-hbase4.apache.org,36387,1689707679286 started 2023-07-18 19:14:41,065 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36387,1689707679286, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36387, sessionid=0x10179db857e0003 2023-07-18 19:14:41,065 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:14:41,065 DEBUG [RS:2;jenkins-hbase4:36387] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:41,066 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36387,1689707679286' 2023-07-18 19:14:41,067 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:14:41,068 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:14:41,068 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:14:41,068 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:14:41,068 DEBUG [RS:2;jenkins-hbase4:36387] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:41,069 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36387,1689707679286' 2023-07-18 19:14:41,069 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:14:41,071 DEBUG [RS:2;jenkins-hbase4:36387] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:14:41,071 DEBUG [RS:2;jenkins-hbase4:36387] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:14:41,071 INFO [RS:0;jenkins-hbase4:39561] regionserver.Replication(203): jenkins-hbase4.apache.org,39561,1689707679120 started 2023-07-18 19:14:41,072 INFO [RS:2;jenkins-hbase4:36387] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:14:41,072 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39561,1689707679120, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39561, sessionid=0x10179db857e0001 2023-07-18 19:14:41,072 INFO [RS:2;jenkins-hbase4:36387] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 19:14:41,073 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:14:41,073 DEBUG [RS:0;jenkins-hbase4:39561] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:41,073 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39561,1689707679120' 2023-07-18 19:14:41,073 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:14:41,074 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:14:41,074 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:14:41,074 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:14:41,074 DEBUG [RS:0;jenkins-hbase4:39561] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:41,075 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39561,1689707679120' 2023-07-18 19:14:41,075 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:14:41,075 DEBUG [RS:0;jenkins-hbase4:39561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:14:41,076 DEBUG [RS:0;jenkins-hbase4:39561] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:14:41,076 INFO [RS:0;jenkins-hbase4:39561] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:14:41,076 INFO [RS:0;jenkins-hbase4:39561] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
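All three region servers (41417, 36387, 39561) are now serving, which is what allows the master's later "Finished waiting on RegionServer count=3" check to pass. A hedged client-side sketch of confirming the live server count via cluster metrics:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Live servers as reported by the master; expect 3 for this mini cluster.
            for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                System.out.println(sn);
            }
        }
    }
}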
2023-07-18 19:14:41,174 INFO [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41417%2C1689707679207, suffix=, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,41417,1689707679207, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:41,177 INFO [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36387%2C1689707679286, suffix=, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,36387,1689707679286, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:41,187 INFO [RS:0;jenkins-hbase4:39561] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39561%2C1689707679120, suffix=, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,39561,1689707679120, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:41,198 DEBUG [jenkins-hbase4:43617] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 19:14:41,225 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:41,233 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:41,233 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:41,234 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:41,235 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:41,249 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:41,249 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:41,249 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:41,249 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:41,251 DEBUG [jenkins-hbase4:43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:41,255 DEBUG [jenkins-hbase4:43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:41,255 DEBUG [jenkins-hbase4:43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:41,255 DEBUG [jenkins-hbase4:43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:41,255 DEBUG [jenkins-hbase4:43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:41,263 INFO [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,36387,1689707679286/jenkins-hbase4.apache.org%2C36387%2C1689707679286.1689707681183 2023-07-18 19:14:41,263 INFO [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,41417,1689707679207/jenkins-hbase4.apache.org%2C41417%2C1689707679207.1689707681184 2023-07-18 19:14:41,263 INFO [RS:0;jenkins-hbase4:39561] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,39561,1689707679120/jenkins-hbase4.apache.org%2C39561%2C1689707679120.1689707681189 2023-07-18 19:14:41,267 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36387,1689707679286, state=OPENING 2023-07-18 19:14:41,267 DEBUG [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK]] 2023-07-18 19:14:41,270 DEBUG [RS:0;jenkins-hbase4:39561] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK]] 2023-07-18 19:14:41,270 DEBUG [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK], DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK]] 2023-07-18 19:14:41,277 DEBUG [PEWorker-3] 
zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 19:14:41,279 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:41,280 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:41,285 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:41,359 WARN [ReadOnlyZKClient-127.0.0.1:62147@0x4b1671e2] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 19:14:41,393 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:41,398 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55112, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:41,400 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36387] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55112 deadline: 1689707741398, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:41,468 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:41,476 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:41,482 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55116, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:41,493 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 19:14:41,493 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:41,498 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36387%2C1689707679286.meta, suffix=.meta, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,36387,1689707679286, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:41,525 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:41,526 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:41,528 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:41,538 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,36387,1689707679286/jenkins-hbase4.apache.org%2C36387%2C1689707679286.meta.1689707681499.meta 2023-07-18 19:14:41,542 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK]] 2023-07-18 19:14:41,542 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:41,544 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:14:41,547 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 19:14:41,549 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
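The open of hbase:meta loads the MultiRowMutationEndpoint coprocessor declared on the table descriptor (visible earlier in the "creating {ENCODED => 1588230740 ...}" entry). A hedged sketch of how a coprocessor of that kind is attached to a table descriptor through the public builder API; the table and family names below are placeholders, not anything created by this test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CoprocessorDescriptorSketch {
    public static void main(String[] args) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo"))                     // placeholder table
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("cf")))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
        System.out.println(td);
    }
}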
2023-07-18 19:14:41,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 19:14:41,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:41,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 19:14:41,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 19:14:41,558 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:14:41,560 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:41,560 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:41,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:14:41,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:41,563 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:14:41,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:41,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:41,565 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:14:41,566 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:41,566 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:14:41,567 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:41,567 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:41,568 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:14:41,569 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:41,570 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:41,573 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:41,577 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 19:14:41,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:14:41,584 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9943570080, jitterRate=-0.07393287122249603}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:14:41,584 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:14:41,598 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689707681465 2023-07-18 19:14:41,625 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 19:14:41,626 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 19:14:41,627 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36387,1689707679286, state=OPEN 2023-07-18 19:14:41,632 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:41,632 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:41,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 19:14:41,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36387,1689707679286 in 347 msec 2023-07-18 19:14:41,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 19:14:41,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 630 msec 2023-07-18 19:14:41,649 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0330 sec 2023-07-18 19:14:41,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689707681650, completionTime=-1 2023-07-18 19:14:41,650 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 19:14:41,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
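With pid=1 (InitMetaProcedure) finished, hbase:meta is open on 36387 and its location is published under /hbase/meta-region-server, so clients can resolve it. A hedged sketch of locating the meta region from a client once this point is reached:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            // Expected to resolve to the server that just opened region 1588230740.
            System.out.println(locator.getRegionLocation(HConstants.EMPTY_START_ROW));
        }
    }
}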
2023-07-18 19:14:41,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 19:14:41,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689707741755 2023-07-18 19:14:41,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689707801755 2023-07-18 19:14:41,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 105 msec 2023-07-18 19:14:41,776 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43617,1689707677179-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43617,1689707677179-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,777 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43617,1689707677179-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43617, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:41,817 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 19:14:41,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
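The master now reports the namespace table missing and creates it; that table is what backs Admin-level namespace operations. A hedged sketch of listing namespaces once the bootstrap finishes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Expect at least the built-in 'default' and 'hbase' namespaces.
            for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                System.out.println(ns.getName());
            }
        }
    }
}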
2023-07-18 19:14:41,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:41,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 19:14:41,872 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:41,876 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:41,892 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:41,896 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea empty. 2023-07-18 19:14:41,896 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:41,897 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 19:14:41,917 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:41,919 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 19:14:41,930 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:41,941 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:41,952 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:41,953 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 empty. 2023-07-18 19:14:41,954 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:41,954 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 19:14:41,954 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:41,957 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 13ce679c9b6de2684bc3af2f72b426ea, NAME => 'hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:41,988 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:41,988 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 13ce679c9b6de2684bc3af2f72b426ea, disabling compactions & flushes 2023-07-18 19:14:41,988 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:41,988 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:41,988 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. after waiting 0 ms 2023-07-18 19:14:41,988 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:41,989 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
2023-07-18 19:14:41,989 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 13ce679c9b6de2684bc3af2f72b426ea: 2023-07-18 19:14:42,014 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:42,017 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:42,036 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7af7ba814960ac41543f63d97428e575, NAME => 'hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:42,066 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707682021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707682021"}]},"ts":"1689707682021"} 2023-07-18 19:14:42,197 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:42,198 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 7af7ba814960ac41543f63d97428e575, disabling compactions & flushes 2023-07-18 19:14:42,198 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:42,198 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:42,198 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. after waiting 0 ms 2023-07-18 19:14:42,198 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:42,198 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
2023-07-18 19:14:42,198 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 7af7ba814960ac41543f63d97428e575: 2023-07-18 19:14:42,206 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:42,207 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707682207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707682207"}]},"ts":"1689707682207"} 2023-07-18 19:14:42,209 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:14:42,213 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:42,213 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:14:42,219 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:42,220 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707682213"}]},"ts":"1689707682213"} 2023-07-18 19:14:42,220 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707682219"}]},"ts":"1689707682219"} 2023-07-18 19:14:42,226 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 19:14:42,226 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 19:14:42,233 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:42,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:42,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:42,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:42,234 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:42,234 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:42,235 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:42,235 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:42,235 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:42,235 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:42,237 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, ASSIGN}] 2023-07-18 19:14:42,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, ASSIGN}] 2023-07-18 19:14:42,241 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, ASSIGN 2023-07-18 19:14:42,245 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, ASSIGN 2023-07-18 19:14:42,246 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:42,253 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:42,254 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 19:14:42,257 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:42,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707682256"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707682256"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707682256"}]},"ts":"1689707682256"} 2023-07-18 19:14:42,265 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:42,265 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707682265"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707682265"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707682265"}]},"ts":"1689707682265"} 2023-07-18 19:14:42,266 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:42,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:42,428 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
2023-07-18 19:14:42,428 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13ce679c9b6de2684bc3af2f72b426ea, NAME => 'hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:42,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:42,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,430 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,431 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:42,432 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:42,435 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,436 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48244, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:42,438 DEBUG [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info 2023-07-18 19:14:42,439 DEBUG [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info 2023-07-18 19:14:42,439 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13ce679c9b6de2684bc3af2f72b426ea columnFamilyName info 2023-07-18 19:14:42,440 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] regionserver.HStore(310): Store=13ce679c9b6de2684bc3af2f72b426ea/info, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:42,441 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,442 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:42,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7af7ba814960ac41543f63d97428e575, NAME => 'hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:42,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:14:42,443 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. service=MultiRowMutationService 2023-07-18 19:14:42,444 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 19:14:42,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:42,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:42,449 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,452 DEBUG [StoreOpener-7af7ba814960ac41543f63d97428e575-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m 2023-07-18 19:14:42,452 DEBUG [StoreOpener-7af7ba814960ac41543f63d97428e575-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m 2023-07-18 19:14:42,453 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7af7ba814960ac41543f63d97428e575 columnFamilyName m 2023-07-18 19:14:42,454 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] regionserver.HStore(310): Store=7af7ba814960ac41543f63d97428e575/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:42,455 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:42,456 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,457 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,459 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13ce679c9b6de2684bc3af2f72b426ea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10139845440, jitterRate=-0.05565330386161804}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:42,459 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13ce679c9b6de2684bc3af2f72b426ea: 2023-07-18 19:14:42,461 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea., pid=8, masterSystemTime=1689707682420 2023-07-18 19:14:42,462 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:42,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:42,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
2023-07-18 19:14:42,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:42,468 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7af7ba814960ac41543f63d97428e575; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1c1509cb, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:42,468 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:42,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7af7ba814960ac41543f63d97428e575: 2023-07-18 19:14:42,469 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707682468"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707682468"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707682468"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707682468"}]},"ts":"1689707682468"} 2023-07-18 19:14:42,470 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575., pid=9, masterSystemTime=1689707682431 2023-07-18 19:14:42,477 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:42,478 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
2023-07-18 19:14:42,483 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:42,484 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707682483"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707682483"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707682483"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707682483"}]},"ts":"1689707682483"} 2023-07-18 19:14:42,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 19:14:42,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,36387,1689707679286 in 209 msec 2023-07-18 19:14:42,494 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 19:14:42,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,39561,1689707679120 in 216 msec 2023-07-18 19:14:42,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-18 19:14:42,496 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, ASSIGN in 252 msec 2023-07-18 19:14:42,497 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:42,498 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707682498"}]},"ts":"1689707682498"} 2023-07-18 19:14:42,500 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 19:14:42,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, ASSIGN in 258 msec 2023-07-18 19:14:42,502 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 19:14:42,502 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:42,503 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707682502"}]},"ts":"1689707682502"} 2023-07-18 19:14:42,507 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 19:14:42,507 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:42,520 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:42,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 649 msec 2023-07-18 19:14:42,528 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 603 msec 2023-07-18 19:14:42,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 19:14:42,576 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:14:42,576 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:42,578 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:42,583 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48256, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:42,586 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 19:14:42,586 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 19:14:42,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 19:14:42,637 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:14:42,643 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 35 msec 2023-07-18 19:14:42,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 19:14:42,668 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:14:42,685 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:42,685 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:42,688 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:14:42,690 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 23 msec 2023-07-18 19:14:42,696 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 19:14:42,701 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 19:14:42,704 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 19:14:42,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.330sec 2023-07-18 19:14:42,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 19:14:42,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-18 19:14:42,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 19:14:42,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43617,1689707677179-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 19:14:42,711 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43617,1689707677179-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 19:14:42,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 19:14:42,750 DEBUG [Listener at localhost/40787] zookeeper.ReadOnlyZKClient(139): Connect 0x37ac0919 to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:42,755 DEBUG [Listener at localhost/40787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f52b832, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:42,771 DEBUG [hconnection-0x5f700c8a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:42,787 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:42,800 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:14:42,802 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:42,815 DEBUG [Listener at localhost/40787] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 19:14:42,819 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 19:14:42,842 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 19:14:42,843 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:14:42,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 19:14:42,851 DEBUG [Listener at localhost/40787] zookeeper.ReadOnlyZKClient(139): Connect 0x3f04a498 to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:42,859 DEBUG [Listener at localhost/40787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4cec13a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-18 19:14:42,860 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:42,869 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:42,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10179db857e000a connected 2023-07-18 19:14:42,920 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=685, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=173, AvailableMemoryMB=3941 2023-07-18 19:14:42,924 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 19:14:42,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:42,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:43,006 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 19:14:43,024 INFO [Listener at localhost/40787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:14:43,024 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:43,025 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:43,025 INFO [Listener at localhost/40787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:14:43,025 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:14:43,025 INFO [Listener at localhost/40787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:14:43,025 INFO [Listener at localhost/40787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:14:43,030 INFO [Listener at localhost/40787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44751 2023-07-18 19:14:43,031 INFO [Listener at localhost/40787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:14:43,036 DEBUG [Listener at localhost/40787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:14:43,038 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:43,046 INFO [Listener at localhost/40787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:14:43,049 INFO [Listener at localhost/40787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44751 connecting to ZooKeeper ensemble=127.0.0.1:62147 2023-07-18 19:14:43,057 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:447510x0, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:14:43,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44751-0x10179db857e000b connected 2023-07-18 19:14:43,060 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:14:43,061 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 19:14:43,062 DEBUG [Listener at localhost/40787] zookeeper.ZKUtil(164): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:14:43,063 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44751 2023-07-18 19:14:43,066 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44751 2023-07-18 19:14:43,070 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44751 2023-07-18 19:14:43,070 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44751 2023-07-18 19:14:43,070 DEBUG [Listener at localhost/40787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44751 2023-07-18 19:14:43,074 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:14:43,074 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:14:43,074 INFO [Listener at localhost/40787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:14:43,075 INFO [Listener at localhost/40787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:14:43,075 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:14:43,075 INFO [Listener at localhost/40787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:14:43,075 
INFO [Listener at localhost/40787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:14:43,076 INFO [Listener at localhost/40787] http.HttpServer(1146): Jetty bound to port 36151 2023-07-18 19:14:43,076 INFO [Listener at localhost/40787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:14:43,080 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:43,080 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3652f836{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:14:43,080 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:43,080 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e42e83e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:14:43,091 INFO [Listener at localhost/40787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:14:43,091 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:14:43,092 INFO [Listener at localhost/40787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:14:43,092 INFO [Listener at localhost/40787] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:14:43,093 INFO [Listener at localhost/40787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:14:43,094 INFO [Listener at localhost/40787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@422d8bf2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:14:43,098 INFO [Listener at localhost/40787] server.AbstractConnector(333): Started ServerConnector@3e629c81{HTTP/1.1, (http/1.1)}{0.0.0.0:36151} 2023-07-18 19:14:43,098 INFO [Listener at localhost/40787] server.Server(415): Started @11801ms 2023-07-18 19:14:43,101 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(951): ClusterId : fb571e25-a6b4-4dee-a3ee-d614c0515106 2023-07-18 19:14:43,101 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:14:43,104 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:14:43,104 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:14:43,108 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:14:43,110 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ReadOnlyZKClient(139): Connect 
0x3270d993 to 127.0.0.1:62147 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:14:43,132 DEBUG [RS:3;jenkins-hbase4:44751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3cb721ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:14:43,133 DEBUG [RS:3;jenkins-hbase4:44751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3888b2e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:14:43,146 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44751 2023-07-18 19:14:43,146 INFO [RS:3;jenkins-hbase4:44751] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:14:43,146 INFO [RS:3;jenkins-hbase4:44751] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:14:43,146 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:14:43,147 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43617,1689707677179 with isa=jenkins-hbase4.apache.org/172.31.14.131:44751, startcode=1689707683024 2023-07-18 19:14:43,147 DEBUG [RS:3;jenkins-hbase4:44751] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:14:43,154 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58109, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:14:43,154 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43617] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,154 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:14:43,155 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3 2023-07-18 19:14:43,155 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44967 2023-07-18 19:14:43,155 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33409 2023-07-18 19:14:43,161 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:14:43,161 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:14:43,161 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:14:43,161 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:14:43,162 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,162 WARN [RS:3;jenkins-hbase4:44751] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 19:14:43,163 INFO [RS:3;jenkins-hbase4:44751] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:43,163 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:43,163 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,163 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:43,164 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:14:43,164 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44751,1689707683024] 2023-07-18 19:14:43,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:43,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:43,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43617,1689707677179] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 19:14:43,186 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,186 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,185 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,193 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,194 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:43,194 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,195 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ZKUtil(162): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,196 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:14:43,196 INFO [RS:3;jenkins-hbase4:44751] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:14:43,200 INFO [RS:3;jenkins-hbase4:44751] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:14:43,201 INFO [RS:3;jenkins-hbase4:44751] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:14:43,201 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:43,202 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:14:43,204 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 19:14:43,204 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,204 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,204 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,205 DEBUG [RS:3;jenkins-hbase4:44751] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:14:43,206 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:43,206 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:43,206 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:14:43,217 INFO [RS:3;jenkins-hbase4:44751] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:14:43,217 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44751,1689707683024-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:14:43,228 INFO [RS:3;jenkins-hbase4:44751] regionserver.Replication(203): jenkins-hbase4.apache.org,44751,1689707683024 started 2023-07-18 19:14:43,228 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44751,1689707683024, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44751, sessionid=0x10179db857e000b 2023-07-18 19:14:43,228 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:14:43,228 DEBUG [RS:3;jenkins-hbase4:44751] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,228 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44751,1689707683024' 2023-07-18 19:14:43,228 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44751,1689707683024' 2023-07-18 19:14:43,229 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:14:43,230 DEBUG [RS:3;jenkins-hbase4:44751] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:14:43,230 DEBUG [RS:3;jenkins-hbase4:44751] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:14:43,230 INFO [RS:3;jenkins-hbase4:44751] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:14:43,231 INFO [RS:3;jenkins-hbase4:44751] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
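[Annotation] At this point the fourth region server (RS:3, port 44751) has completed startup and registered its ephemeral znode, which is why the group manager reported "Updated with servers: 4" even though the mini cluster began with three region servers. A minimal sketch of how a test can bring up one more region server on an already-running HBaseTestingUtility cluster; the method and variable names here are illustrative, not the test's own code:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      public static void addRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        // Starts one more HRegionServer thread inside the same JVM mini cluster.
        JVMClusterUtil.RegionServerThread rs = cluster.startRegionServer();
        // Block until the new server has checked in with the master.
        rs.waitForServerOnline();
      }
    }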
2023-07-18 19:14:43,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:43,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:43,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:43,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:43,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:43,246 DEBUG [hconnection-0x394eed7c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:43,249 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:43,253 DEBUG [hconnection-0x394eed7c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:43,255 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48264, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:43,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:43,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:43,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:43,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:36768 deadline: 1689708883268, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:43,271 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:43,273 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:43,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:43,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:43,275 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:43,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:43,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:43,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:43,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:43,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:43,290 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:43,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:43,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:43,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:43,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:43,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:43,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:43,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:43,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(238): Moving server region 13ce679c9b6de2684bc3af2f72b426ea, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:43,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:43,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:43,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:43,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:43,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, REOPEN/MOVE 2023-07-18 19:14:43,329 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, 
region=13ce679c9b6de2684bc3af2f72b426ea, REOPEN/MOVE 2023-07-18 19:14:43,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:43,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:43,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:43,331 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:43,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:43,331 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707683331"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707683331"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707683331"}]},"ts":"1689707683331"} 2023-07-18 19:14:43,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 19:14:43,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(238): Moving server region 7af7ba814960ac41543f63d97428e575, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:43,334 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 19:14:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:43,335 INFO [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44751%2C1689707683024, suffix=, 
logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:43,336 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36387,1689707679286, state=CLOSING 2023-07-18 19:14:43,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, REOPEN/MOVE 2023-07-18 19:14:43,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:43,337 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, REOPEN/MOVE 2023-07-18 19:14:43,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-18 19:14:43,339 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,339 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707683339"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707683339"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707683339"}]},"ts":"1689707683339"} 2023-07-18 19:14:43,344 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:43,344 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:43,346 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:43,348 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; CloseRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:43,351 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=17, ppid=14, state=RUNNABLE; CloseRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:43,369 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:43,373 DEBUG [RS-EventLoopGroup-7-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:43,373 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:43,383 INFO [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024/jenkins-hbase4.apache.org%2C44751%2C1689707683024.1689707683336 2023-07-18 19:14:43,383 DEBUG [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK]] 2023-07-18 19:14:43,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:43,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13ce679c9b6de2684bc3af2f72b426ea, disabling compactions & flushes 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:14:43,502 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:43,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. after waiting 0 ms 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
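[Annotation] The RPC entries above on port 43617 (AddRSGroup, ListRSGroupInfos, GetRSGroupInfo, MoveServers) are the rsgroup admin calls the test issues through RSGroupAdminClient, as the stack trace shows. The ConstraintException is the expected rejection when the master's own address (43617) is moved into a group, and TestRSGroupsBase merely logs it as "Got this on setup, FYI". A hedged sketch of the client-side calls involved; the group name and addresses are placeholders taken from this log, not a definitive reproduction of the test:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupAdminSketch {
      public static void moveServerIntoGroup(Connection conn) throws Exception {
        RSGroupAdmin admin = new RSGroupAdminClient(conn);
        admin.addRSGroup("Group_example");                              // AddRSGroup
        Address rs = Address.fromParts("jenkins-hbase4.apache.org", 36387);
        admin.moveServers(Collections.singleton(rs), "Group_example");  // MoveServers
        try {
          // Moving the master's address is rejected: it is not a live region server.
          Address master = Address.fromParts("jenkins-hbase4.apache.org", 43617);
          admin.moveServers(Collections.singleton(master), "Group_example");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist."
        }
      }
    }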
2023-07-18 19:14:43,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:14:43,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 13ce679c9b6de2684bc3af2f72b426ea 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 19:14:43,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-18 19:14:43,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/.tmp/info/e176bdace53b45688c2ed63857b38f42 2023-07-18 19:14:43,699 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/b301ff35b43c4ca3acd7df2d5e3cbb87 2023-07-18 19:14:43,775 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/.tmp/info/e176bdace53b45688c2ed63857b38f42 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info/e176bdace53b45688c2ed63857b38f42 2023-07-18 19:14:43,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info/e176bdace53b45688c2ed63857b38f42, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 19:14:43,798 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 13ce679c9b6de2684bc3af2f72b426ea in 295ms, sequenceid=6, compaction requested=false 2023-07-18 19:14:43,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 19:14:43,806 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/901dcf925d0b4367b19d43b1cc4dfffb 2023-07-18 19:14:43,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-18 19:14:43,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
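[Annotation] The close sequence above (Flushing -> DefaultStoreFlusher -> Committing the .tmp hfile -> writing recovered.edits/9.seqid -> Closed) is what every REOPEN/MOVE transition performs on the source server before the region reopens elsewhere. The same kind of transition can also be driven through the public Admin API; a minimal sketch assuming a connection and a known encoded region name (both placeholders), not the path the rsgroup server itself uses internally:

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionMoveSketch {
      public static void moveRegion(Connection conn, String encodedRegionName,
          ServerName destination) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          // The master schedules a TransitRegionStateProcedure (REOPEN/MOVE):
          // close + flush on the current host, then open on the destination.
          admin.move(Bytes.toBytes(encodedRegionName), destination);
        }
      }
    }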
2023-07-18 19:14:43,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13ce679c9b6de2684bc3af2f72b426ea: 2023-07-18 19:14:43,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 13ce679c9b6de2684bc3af2f72b426ea move to jenkins-hbase4.apache.org,44751,1689707683024 record at close sequenceid=6 2023-07-18 19:14:43,821 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/b301ff35b43c4ca3acd7df2d5e3cbb87 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/b301ff35b43c4ca3acd7df2d5e3cbb87 2023-07-18 19:14:43,822 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:43,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:43,830 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/b301ff35b43c4ca3acd7df2d5e3cbb87, entries=22, sequenceid=16, filesize=7.3 K 2023-07-18 19:14:43,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/901dcf925d0b4367b19d43b1cc4dfffb as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/901dcf925d0b4367b19d43b1cc4dfffb 2023-07-18 19:14:43,845 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/901dcf925d0b4367b19d43b1cc4dfffb, entries=4, sequenceid=16, filesize=4.8 K 2023-07-18 19:14:43,852 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 349ms, sequenceid=16, compaction requested=false 2023-07-18 19:14:43,852 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 19:14:43,888 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-18 19:14:43,890 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:14:43,890 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:14:43,891 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:14:43,891 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): 
Adding 1588230740 move to jenkins-hbase4.apache.org,41417,1689707679207 record at close sequenceid=16 2023-07-18 19:14:43,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 19:14:43,896 WARN [PEWorker-3] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 19:14:43,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-18 19:14:43,899 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36387,1689707679286 in 550 msec 2023-07-18 19:14:43,902 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:14:44,052 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 19:14:44,052 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41417,1689707679207, state=OPENING 2023-07-18 19:14:44,055 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:44,055 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:44,055 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:44,209 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:44,209 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:44,214 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:44,220 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 19:14:44,221 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:44,224 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41417%2C1689707679207.meta, suffix=.meta, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,41417,1689707679207, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:44,251 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in 
unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:44,256 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:44,262 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:44,265 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,41417,1689707679207/jenkins-hbase4.apache.org%2C41417%2C1689707679207.meta.1689707684225.meta 2023-07-18 19:14:44,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK]] 2023-07-18 19:14:44,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:44,265 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:14:44,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 19:14:44,266 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
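[Annotation] hbase:meta has now been reopened on jenkins-hbase4.apache.org,41417 with its own AsyncFSWAL (AsyncFSWALProvider is the default provider, selectable via hbase.wal.provider). Clients do not track this relocation by hand; the current location can be re-read through the region locator. A small sketch, not tied to this particular test:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static HRegionLocation currentMetaLocation(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // reload=true bypasses the cached location and asks the cluster again,
          // so it reflects the REOPEN/MOVE that just completed.
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
        }
      }
    }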
2023-07-18 19:14:44,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 19:14:44,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:44,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 19:14:44,266 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 19:14:44,272 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:14:44,274 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:44,274 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:44,275 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:14:44,287 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/b301ff35b43c4ca3acd7df2d5e3cbb87 2023-07-18 19:14:44,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:44,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:14:44,292 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:44,292 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 
19:14:44,292 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:14:44,293 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:44,294 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:14:44,295 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:44,295 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:44,296 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:14:44,313 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/901dcf925d0b4367b19d43b1cc4dfffb 2023-07-18 19:14:44,313 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:44,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:44,323 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:44,328 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 19:14:44,331 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:14:44,336 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10027267360, jitterRate=-0.06613795459270477}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:14:44,337 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:14:44,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-18 19:14:44,353 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689707684209 2023-07-18 19:14:44,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 19:14:44,360 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 19:14:44,360 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41417,1689707679207, state=OPEN 2023-07-18 19:14:44,362 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:44,362 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:44,365 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=CLOSED 2023-07-18 19:14:44,365 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707684365"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707684365"}]},"ts":"1689707684365"} 2023-07-18 19:14:44,367 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36387] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 217 connection: 172.31.14.131:55112 deadline: 1689707744366, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41417 startCode=1689707679207. As of locationSeqNum=16. 
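[Annotation] The callId 41 entry shows a Mutate arriving at the old meta host (36387) and being answered with RegionMovedException pointing at 41417; HBase clients treat this as retryable, refresh their location cache, and resend, so the test proceeds without error. A short sketch of the standard client keys that bound that retry behaviour; the values are illustrative, and the deadline field in the log is only loosely related to the operation timeout shown here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class ClientRetrySketch {
      public static Configuration retryConf() {
        Configuration conf = HBaseConfiguration.create();
        // Number of retries before an operation such as the Mutate above fails for good.
        conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 10);
        // Base pause between retries, in milliseconds (backoff is applied on top).
        conf.setLong(HConstants.HBASE_CLIENT_PAUSE, 100L);
        // Overall per-operation time budget.
        conf.setLong(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT, 60_000L);
        return conf;
      }
    }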
2023-07-18 19:14:44,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=13 2023-07-18 19:14:44,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41417,1689707679207 in 307 msec 2023-07-18 19:14:44,372 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.0390 sec 2023-07-18 19:14:44,468 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:44,472 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55204, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:44,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-18 19:14:44,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,36387,1689707679286 in 1.1380 sec 2023-07-18 19:14:44,480 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:44,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7af7ba814960ac41543f63d97428e575, disabling compactions & flushes 2023-07-18 19:14:44,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:44,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:44,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. after waiting 0 ms 2023-07-18 19:14:44,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
2023-07-18 19:14:44,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7af7ba814960ac41543f63d97428e575 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-18 19:14:44,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/.tmp/m/b6a15fadc4794f8dbfd0329ebfed50c4 2023-07-18 19:14:44,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/.tmp/m/b6a15fadc4794f8dbfd0329ebfed50c4 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m/b6a15fadc4794f8dbfd0329ebfed50c4 2023-07-18 19:14:44,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m/b6a15fadc4794f8dbfd0329ebfed50c4, entries=3, sequenceid=9, filesize=5.2 K 2023-07-18 19:14:44,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for 7af7ba814960ac41543f63d97428e575 in 85ms, sequenceid=9, compaction requested=false 2023-07-18 19:14:44,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 19:14:44,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 19:14:44,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:14:44,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
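Note: the close path above flushes the pending ~1.38 KB of hbase:rsgroup memstore data to a new HFile under .tmp/m and then commits it into the store. A minimal sketch, assuming a standard client connection to this cluster, of forcing the same kind of flush through the public Admin API:

    // Hedged sketch: explicit flush of a table's memstores via the Admin API,
    // analogous to the implicit flush the closing region performs above.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.flush(TableName.valueOf("hbase:rsgroup")); // flush every store of the table
        }
      }
    }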
2023-07-18 19:14:44,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7af7ba814960ac41543f63d97428e575: 2023-07-18 19:14:44,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7af7ba814960ac41543f63d97428e575 move to jenkins-hbase4.apache.org,44751,1689707683024 record at close sequenceid=9 2023-07-18 19:14:44,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,621 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=CLOSED 2023-07-18 19:14:44,621 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707684621"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707684621"}]},"ts":"1689707684621"} 2023-07-18 19:14:44,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-18 19:14:44,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; CloseRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,39561,1689707679120 in 1.2760 sec 2023-07-18 19:14:44,629 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:44,629 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
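Note: the close record above notes that region 7af7ba814960ac41543f63d97428e575 will be reopened on jenkins-hbase4.apache.org,44751,1689707683024; that REOPEN/MOVE is driven internally by the rsgroup MoveServers operation. As an illustrative, hedged aside, a comparable single-region move can be issued from a client; the region and server names below are copied from the log.

    // Hedged sketch only; not how this test triggers the move.
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      static void moveRsGroupRegion(Admin admin) throws Exception {
        admin.move(Bytes.toBytes("7af7ba814960ac41543f63d97428e575"),
            ServerName.valueOf("jenkins-hbase4.apache.org,44751,1689707683024"));
      }
    }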
2023-07-18 19:14:44,629 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:44,629 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707684629"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707684629"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707684629"}]},"ts":"1689707684629"} 2023-07-18 19:14:44,631 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:44,632 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707684631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707684631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707684631"}]},"ts":"1689707684631"} 2023-07-18 19:14:44,632 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:44,634 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=14, state=RUNNABLE; OpenRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:44,789 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:44,789 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:44,794 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43534, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:44,805 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
2023-07-18 19:14:44,805 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 13ce679c9b6de2684bc3af2f72b426ea, NAME => 'hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:44,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:44,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,821 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,825 DEBUG [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info 2023-07-18 19:14:44,826 DEBUG [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info 2023-07-18 19:14:44,826 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 13ce679c9b6de2684bc3af2f72b426ea columnFamilyName info 2023-07-18 19:14:44,840 DEBUG [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/info/e176bdace53b45688c2ed63857b38f42 2023-07-18 19:14:44,840 INFO [StoreOpener-13ce679c9b6de2684bc3af2f72b426ea-1] regionserver.HStore(310): Store=13ce679c9b6de2684bc3af2f72b426ea/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:44,842 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,844 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:14:44,849 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 13ce679c9b6de2684bc3af2f72b426ea; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11016740800, jitterRate=0.026013940572738647}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:44,850 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 13ce679c9b6de2684bc3af2f72b426ea: 2023-07-18 19:14:44,851 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea., pid=19, masterSystemTime=1689707684789 2023-07-18 19:14:44,855 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:44,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:14:44,856 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:44,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7af7ba814960ac41543f63d97428e575, NAME => 'hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:44,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:14:44,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. service=MultiRowMutationService 2023-07-18 19:14:44,857 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 19:14:44,857 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=13ce679c9b6de2684bc3af2f72b426ea, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:44,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:44,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,857 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707684856"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707684856"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707684856"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707684856"}]},"ts":"1689707684856"} 2023-07-18 19:14:44,859 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,864 DEBUG [StoreOpener-7af7ba814960ac41543f63d97428e575-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m 2023-07-18 19:14:44,864 DEBUG [StoreOpener-7af7ba814960ac41543f63d97428e575-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m 2023-07-18 19:14:44,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-18 19:14:44,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure 13ce679c9b6de2684bc3af2f72b426ea, server=jenkins-hbase4.apache.org,44751,1689707683024 in 228 msec 2023-07-18 19:14:44,865 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window 
factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7af7ba814960ac41543f63d97428e575 columnFamilyName m 2023-07-18 19:14:44,870 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=13ce679c9b6de2684bc3af2f72b426ea, REOPEN/MOVE in 1.5380 sec 2023-07-18 19:14:44,883 DEBUG [StoreOpener-7af7ba814960ac41543f63d97428e575-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m/b6a15fadc4794f8dbfd0329ebfed50c4 2023-07-18 19:14:44,883 INFO [StoreOpener-7af7ba814960ac41543f63d97428e575-1] regionserver.HStore(310): Store=7af7ba814960ac41543f63d97428e575/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:44,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,886 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,890 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:14:44,891 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7af7ba814960ac41543f63d97428e575; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6de2baa3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:44,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7af7ba814960ac41543f63d97428e575: 2023-07-18 19:14:44,892 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575., pid=20, masterSystemTime=1689707684789 2023-07-18 19:14:44,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:14:44,895 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
2023-07-18 19:14:44,896 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=7af7ba814960ac41543f63d97428e575, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:44,896 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707684896"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707684896"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707684896"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707684896"}]},"ts":"1689707684896"} 2023-07-18 19:14:44,902 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=14 2023-07-18 19:14:44,902 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=14, state=SUCCESS; OpenRegionProcedure 7af7ba814960ac41543f63d97428e575, server=jenkins-hbase4.apache.org,44751,1689707683024 in 265 msec 2023-07-18 19:14:44,904 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7af7ba814960ac41543f63d97428e575, REOPEN/MOVE in 1.5670 sec 2023-07-18 19:14:45,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to default 2023-07-18 19:14:45,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:45,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:45,343 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39561] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:48264 deadline: 1689707745343, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44751 startCode=1689707683024. As of locationSeqNum=9. 2023-07-18 19:14:45,447 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36387] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:55136 deadline: 1689707745447, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41417 startCode=1689707679207. As of locationSeqNum=16. 
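Note: the RSGroupAdminServer lines above record the MoveServers call completing (default => Group_testTableMoveTruncateAndDrop_1065181980) and the follow-up GetRSGroupInfo/ListRSGroupInfos requests. A hedged sketch, assuming the RSGroupAdminClient helper from this hbase-rsgroup module; host names and ports are copied from the log and are illustrative.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_1065181980";
          groups.addRSGroup(group);
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 36387),
              Address.fromParts("jenkins-hbase4.apache.org", 39561)));
          groups.moveServers(servers, group);              // MoveServers: default => group
          RSGroupInfo info = groups.getRSGroupInfo(group); // GetRSGroupInfo
          System.out.println(info.getServers());
        }
      }
    }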
2023-07-18 19:14:45,549 DEBUG [hconnection-0x394eed7c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:45,552 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:45,569 DEBUG [hconnection-0x394eed7c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:45,571 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:45,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:45,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:45,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:45,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:45,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:45,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:45,602 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:45,605 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39561] ipc.CallRunner(144): callId: 50 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:48256 deadline: 1689707745605, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44751 startCode=1689707683024. As of locationSeqNum=9. 
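Note: the HMaster$4(2112) line above logs the create request for 'Group_testTableMoveTruncateAndDrop' with a single family 'f' at default attributes and REGION_REPLICATION => '1'. A hedged sketch of the equivalent client-side creation with pre-split regions; the split keys below are simplified placeholders, since the test's real splits include non-printable bytes (visible in the region names later in the log).

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      static void create(Admin admin) throws Exception {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setRegionReplication(1)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)     // VERSIONS => '1'
                .setBlocksize(65536)   // BLOCKSIZE => '65536'
                .build())
            .build();
        byte[][] splits = {
            Bytes.toBytes("aaaaa"), Bytes.toBytes("k"), Bytes.toBytes("r"), Bytes.toBytes("zzzzz") };
        admin.createTable(td, splits); // 4 split keys => the 5 regions added to meta below
      }
    }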
2023-07-18 19:14:45,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-18 19:14:45,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:14:45,710 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:45,717 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43544, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:45,721 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:45,722 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:45,723 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:45,723 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:45,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:14:45,732 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:45,743 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:45,750 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:45,750 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 empty. 
2023-07-18 19:14:45,750 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:45,751 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:45,751 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:45,754 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:45,754 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 empty. 2023-07-18 19:14:45,755 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:45,755 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 empty. 2023-07-18 19:14:45,759 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 empty. 2023-07-18 19:14:45,759 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:45,759 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 empty. 
2023-07-18 19:14:45,760 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:45,760 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:45,760 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 19:14:45,814 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:45,818 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:45,818 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 979a485b795602cf9e48f56b65b3d294, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:45,823 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7046d9ca224f8458b78dab74ca1af4e8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:45,934 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:45,934 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 7046d9ca224f8458b78dab74ca1af4e8, disabling compactions & flushes 2023-07-18 19:14:45,934 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:45,934 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:45,934 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. after waiting 0 ms 2023-07-18 19:14:45,934 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:45,934 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:45,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:14:45,937 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 7046d9ca224f8458b78dab74ca1af4e8: 2023-07-18 19:14:45,938 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 791eab80de5619734a50e541f7ad3cc4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:45,941 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:45,941 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, disabling compactions & flushes 2023-07-18 19:14:45,941 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 
2023-07-18 19:14:45,941 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:45,941 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. after waiting 0 ms 2023-07-18 19:14:45,941 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:45,942 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:45,942 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9: 2023-07-18 19:14:45,942 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 10d7a1838640529f37749e967c70d2c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:45,973 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:45,982 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 979a485b795602cf9e48f56b65b3d294, disabling compactions & flushes 2023-07-18 19:14:45,982 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:45,982 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:45,982 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. after waiting 0 ms 2023-07-18 19:14:45,982 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 
2023-07-18 19:14:45,982 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:45,982 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 979a485b795602cf9e48f56b65b3d294: 2023-07-18 19:14:46,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 791eab80de5619734a50e541f7ad3cc4, disabling compactions & flushes 2023-07-18 19:14:46,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. after waiting 0 ms 2023-07-18 19:14:46,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,016 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 791eab80de5619734a50e541f7ad3cc4: 2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 10d7a1838640529f37749e967c70d2c1, disabling compactions & flushes 2023-07-18 19:14:46,034 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 
2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. after waiting 0 ms 2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,034 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,034 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 10d7a1838640529f37749e967c70d2c1: 2023-07-18 19:14:46,039 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:46,041 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707686041"}]},"ts":"1689707686041"} 2023-07-18 19:14:46,041 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707686041"}]},"ts":"1689707686041"} 2023-07-18 19:14:46,041 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707686041"}]},"ts":"1689707686041"} 2023-07-18 19:14:46,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707686041"}]},"ts":"1689707686041"} 2023-07-18 19:14:46,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686041"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707686041"}]},"ts":"1689707686041"} 2023-07-18 19:14:46,108 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
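Note: once CREATE_TABLE_ADD_TO_META has written the five Puts above ("Added 5 regions to meta"), the new regions are visible to clients. A minimal, hedged sketch of listing them back through the Admin API:

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegionsSketch {
      static void show(Admin admin) throws Exception {
        List<RegionInfo> regions =
            admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        for (RegionInfo ri : regions) {
          System.out.println(ri.getEncodedName() + " start=" + Bytes.toStringBinary(ri.getStartKey()));
        }
      }
    }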
2023-07-18 19:14:46,112 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:46,113 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707686112"}]},"ts":"1689707686112"} 2023-07-18 19:14:46,116 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 19:14:46,120 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,121 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,121 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,121 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,121 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, ASSIGN}] 2023-07-18 19:14:46,124 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, ASSIGN 2023-07-18 19:14:46,124 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, ASSIGN 2023-07-18 19:14:46,125 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, ASSIGN 2023-07-18 19:14:46,126 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, ASSIGN 2023-07-18 19:14:46,127 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:46,127 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:14:46,130 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:14:46,130 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, ASSIGN 2023-07-18 19:14:46,130 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:46,136 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:14:46,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:14:46,280 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 19:14:46,285 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,285 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,285 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,285 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686284"}]},"ts":"1689707686284"} 2023-07-18 19:14:46,285 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686284"}]},"ts":"1689707686284"} 2023-07-18 19:14:46,285 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686284"}]},"ts":"1689707686284"} 2023-07-18 19:14:46,285 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,285 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,286 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686284"}]},"ts":"1689707686284"} 2023-07-18 19:14:46,286 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686285"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686285"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686285"}]},"ts":"1689707686285"} 2023-07-18 19:14:46,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; OpenRegionProcedure 
7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:46,298 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=26, state=RUNNABLE; OpenRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:46,300 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=22, state=RUNNABLE; OpenRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:46,301 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; OpenRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:46,304 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=23, state=RUNNABLE; OpenRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:46,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 10d7a1838640529f37749e967c70d2c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 19:14:46,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 
2023-07-18 19:14:46,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 791eab80de5619734a50e541f7ad3cc4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,478 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,479 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,481 DEBUG [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/f 2023-07-18 19:14:46,481 DEBUG [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/f 2023-07-18 19:14:46,482 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 10d7a1838640529f37749e967c70d2c1 columnFamilyName f 2023-07-18 19:14:46,482 DEBUG [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/f 2023-07-18 19:14:46,482 DEBUG [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/f 2023-07-18 19:14:46,482 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] regionserver.HStore(310): Store=10d7a1838640529f37749e967c70d2c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:46,485 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 791eab80de5619734a50e541f7ad3cc4 columnFamilyName f 2023-07-18 19:14:46,488 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] regionserver.HStore(310): Store=791eab80de5619734a50e541f7ad3cc4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:46,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:46,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:46,511 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 791eab80de5619734a50e541f7ad3cc4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10827546080, jitterRate=0.008393809199333191}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:46,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 791eab80de5619734a50e541f7ad3cc4: 2023-07-18 19:14:46,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4., pid=30, masterSystemTime=1689707686456 2023-07-18 19:14:46,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:46,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:46,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 
2023-07-18 19:14:46,516 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 19:14:46,517 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686516"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707686516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707686516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707686516"}]},"ts":"1689707686516"} 2023-07-18 19:14:46,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:46,521 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 10d7a1838640529f37749e967c70d2c1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11371505760, jitterRate=0.05905400216579437}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:46,521 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 10d7a1838640529f37749e967c70d2c1: 2023-07-18 19:14:46,523 DEBUG [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f 2023-07-18 
19:14:46,523 DEBUG [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f 2023-07-18 19:14:46,524 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 columnFamilyName f 2023-07-18 19:14:46,525 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] regionserver.HStore(310): Store=3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:46,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,528 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1., pid=28, masterSystemTime=1689707686443 2023-07-18 19:14:46,528 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-18 19:14:46,529 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; OpenRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,44751,1689707683024 in 220 msec 2023-07-18 19:14:46,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:46,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 
2023-07-18 19:14:46,532 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7046d9ca224f8458b78dab74ca1af4e8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 19:14:46,533 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, ASSIGN in 408 msec 2023-07-18 19:14:46,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,533 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,534 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686533"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707686533"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707686533"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707686533"}]},"ts":"1689707686533"} 2023-07-18 19:14:46,535 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,538 DEBUG [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/f 2023-07-18 19:14:46,538 DEBUG [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/f 2023-07-18 19:14:46,539 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7046d9ca224f8458b78dab74ca1af4e8 columnFamilyName f 2023-07-18 19:14:46,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:46,540 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=26 2023-07-18 19:14:46,542 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=26, state=SUCCESS; OpenRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,41417,1689707679207 in 238 msec 2023-07-18 19:14:46,542 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] regionserver.HStore(310): Store=7046d9ca224f8458b78dab74ca1af4e8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:46,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, ASSIGN in 419 msec 2023-07-18 19:14:46,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:46,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3dbc3f4421fd1efc7f8f8b5b7f70a7f9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9696831520, jitterRate=-0.0969121903181076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:46,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9: 2023-07-18 19:14:46,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:46,556 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9., pid=29, masterSystemTime=1689707686456 2023-07-18 19:14:46,561 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 
updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,561 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686561"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707686561"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707686561"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707686561"}]},"ts":"1689707686561"} 2023-07-18 19:14:46,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:46,564 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:46,567 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=22 2023-07-18 19:14:46,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:46,568 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=22, state=SUCCESS; OpenRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,44751,1689707683024 in 264 msec 2023-07-18 19:14:46,570 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7046d9ca224f8458b78dab74ca1af4e8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10615927360, jitterRate=-0.011314719915390015}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:46,570 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7046d9ca224f8458b78dab74ca1af4e8: 2023-07-18 19:14:46,575 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8., pid=27, masterSystemTime=1689707686443 2023-07-18 19:14:46,598 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, ASSIGN in 447 msec 2023-07-18 19:14:46,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:46,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 
2023-07-18 19:14:46,601 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 979a485b795602cf9e48f56b65b3d294, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 19:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,615 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,616 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686607"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707686607"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707686607"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707686607"}]},"ts":"1689707686607"} 2023-07-18 19:14:46,616 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,645 DEBUG [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/f 2023-07-18 19:14:46,645 DEBUG [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/f 2023-07-18 19:14:46,647 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 979a485b795602cf9e48f56b65b3d294 columnFamilyName f 2023-07-18 19:14:46,664 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] regionserver.HStore(310): Store=979a485b795602cf9e48f56b65b3d294/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:46,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:46,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-18 19:14:46,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; OpenRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,41417,1689707679207 in 355 msec 2023-07-18 19:14:46,691 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, ASSIGN in 561 msec 2023-07-18 19:14:46,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:46,701 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 979a485b795602cf9e48f56b65b3d294; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9954222560, jitterRate=-0.07294078171253204}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:46,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 979a485b795602cf9e48f56b65b3d294: 2023-07-18 19:14:46,702 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294., pid=31, masterSystemTime=1689707686443 2023-07-18 19:14:46,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 
2023-07-18 19:14:46,705 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:46,705 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,706 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686705"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707686705"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707686705"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707686705"}]},"ts":"1689707686705"} 2023-07-18 19:14:46,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=23 2023-07-18 19:14:46,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=23, state=SUCCESS; OpenRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,41417,1689707679207 in 404 msec 2023-07-18 19:14:46,714 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=21 2023-07-18 19:14:46,715 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, ASSIGN in 590 msec 2023-07-18 19:14:46,716 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:46,717 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707686717"}]},"ts":"1689707686717"} 2023-07-18 19:14:46,719 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 19:14:46,722 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:46,725 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.1260 sec 2023-07-18 19:14:46,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:14:46,752 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-18 19:14:46,752 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-18 19:14:46,753 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:46,767 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36387] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:55124 deadline: 1689707746767, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41417 startCode=1689707679207. As of locationSeqNum=16. 2023-07-18 19:14:46,870 DEBUG [hconnection-0x5f700c8a-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:46,873 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55224, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:46,884 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-18 19:14:46,885 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:46,886 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 19:14:46,886 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:46,893 DEBUG [Listener at localhost/40787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:46,895 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55148, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:46,897 DEBUG [Listener at localhost/40787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:46,900 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:46,900 DEBUG [Listener at localhost/40787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:46,902 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:46,904 DEBUG [Listener at localhost/40787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:14:46,906 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43552, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:14:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:46,920 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:46,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:46,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:46,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:46,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, REOPEN/MOVE 2023-07-18 19:14:46,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 979a485b795602cf9e48f56b65b3d294 to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,940 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, REOPEN/MOVE 2023-07-18 19:14:46,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,941 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:46,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,942 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,942 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686942"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686942"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686942"}]},"ts":"1689707686942"} 2023-07-18 19:14:46,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, REOPEN/MOVE 2023-07-18 19:14:46,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 7046d9ca224f8458b78dab74ca1af4e8 to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,943 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, REOPEN/MOVE 2023-07-18 19:14:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:46,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,944 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,944 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686944"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686944"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686944"}]},"ts":"1689707686944"} 2023-07-18 19:14:46,945 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, 
ppid=32, state=RUNNABLE; CloseRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:46,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, REOPEN/MOVE 2023-07-18 19:14:46,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 791eab80de5619734a50e541f7ad3cc4 to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,946 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, REOPEN/MOVE 2023-07-18 19:14:46,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:46,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,948 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:46,949 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686948"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686948"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686948"}]},"ts":"1689707686948"} 2023-07-18 19:14:46,951 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=34, state=RUNNABLE; CloseRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:46,952 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, REOPEN/MOVE 2023-07-18 19:14:46,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 10d7a1838640529f37749e967c70d2c1 to RSGroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:46,954 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, REOPEN/MOVE 2023-07-18 19:14:46,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:46,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:46,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:46,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:46,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:46,956 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:46,956 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707686956"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686956"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686956"}]},"ts":"1689707686956"} 2023-07-18 19:14:46,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, REOPEN/MOVE 2023-07-18 19:14:46,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1065181980, current retry=0 2023-07-18 19:14:46,958 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, REOPEN/MOVE 2023-07-18 19:14:46,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=36, state=RUNNABLE; CloseRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:46,960 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:14:46,961 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707686960"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707686960"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707686960"}]},"ts":"1689707686960"} 2023-07-18 19:14:46,963 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=41, ppid=39, state=RUNNABLE; CloseRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:47,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, disabling compactions & flushes 2023-07-18 19:14:47,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:47,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:47,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. after waiting 0 ms 2023-07-18 19:14:47,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:47,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7046d9ca224f8458b78dab74ca1af4e8, disabling compactions & flushes 2023-07-18 19:14:47,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:47,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:47,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. after waiting 0 ms 2023-07-18 19:14:47,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:47,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:47,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 
2023-07-18 19:14:47,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9: 2023-07-18 19:14:47,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 move to jenkins-hbase4.apache.org,39561,1689707679120 record at close sequenceid=2 2023-07-18 19:14:47,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:47,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:47,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7046d9ca224f8458b78dab74ca1af4e8: 2023-07-18 19:14:47,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7046d9ca224f8458b78dab74ca1af4e8 move to jenkins-hbase4.apache.org,39561,1689707679120 record at close sequenceid=2 2023-07-18 19:14:47,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 791eab80de5619734a50e541f7ad3cc4, disabling compactions & flushes 2023-07-18 19:14:47,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:47,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:47,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. after waiting 0 ms 2023-07-18 19:14:47,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 
2023-07-18 19:14:47,128 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=CLOSED 2023-07-18 19:14:47,128 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687128"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707687128"}]},"ts":"1689707687128"} 2023-07-18 19:14:47,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,130 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=CLOSED 2023-07-18 19:14:47,130 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687130"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707687130"}]},"ts":"1689707687130"} 2023-07-18 19:14:47,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 979a485b795602cf9e48f56b65b3d294, disabling compactions & flushes 2023-07-18 19:14:47,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:47,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:47,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. after waiting 0 ms 2023-07-18 19:14:47,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:47,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:47,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 
2023-07-18 19:14:47,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 791eab80de5619734a50e541f7ad3cc4: 2023-07-18 19:14:47,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 791eab80de5619734a50e541f7ad3cc4 move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:47,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-18 19:14:47,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,44751,1689707683024 in 188 msec 2023-07-18 19:14:47,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,141 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:47,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=CLOSED 2023-07-18 19:14:47,141 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687141"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707687141"}]},"ts":"1689707687141"} 2023-07-18 19:14:47,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=34 2023-07-18 19:14:47,141 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=34, state=SUCCESS; CloseRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,41417,1689707679207 in 184 msec 2023-07-18 19:14:47,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:47,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 
2023-07-18 19:14:47,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 979a485b795602cf9e48f56b65b3d294: 2023-07-18 19:14:47,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 979a485b795602cf9e48f56b65b3d294 move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:47,146 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:47,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 10d7a1838640529f37749e967c70d2c1, disabling compactions & flushes 2023-07-18 19:14:47,153 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:47,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:47,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. after waiting 0 ms 2023-07-18 19:14:47,153 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 
2023-07-18 19:14:47,154 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=CLOSED 2023-07-18 19:14:47,154 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687154"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707687154"}]},"ts":"1689707687154"} 2023-07-18 19:14:47,155 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=36 2023-07-18 19:14:47,155 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=36, state=SUCCESS; CloseRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,44751,1689707683024 in 187 msec 2023-07-18 19:14:47,157 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:47,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-18 19:14:47,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,41417,1689707679207 in 208 msec 2023-07-18 19:14:47,162 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:47,164 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 19:14:47,168 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:47,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 
2023-07-18 19:14:47,169 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 10d7a1838640529f37749e967c70d2c1: 2023-07-18 19:14:47,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 10d7a1838640529f37749e967c70d2c1 move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:47,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,173 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=CLOSED 2023-07-18 19:14:47,173 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707687173"}]},"ts":"1689707687173"} 2023-07-18 19:14:47,180 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=39 2023-07-18 19:14:47,180 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=39, state=SUCCESS; CloseRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,41417,1689707679207 in 212 msec 2023-07-18 19:14:47,182 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:47,291 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
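At this point all five regions are closed and the balancer has chosen destinations inside the target group, so the following entries write OPENING states and new regionLocations to hbase:meta before dispatching OpenRegionProcedures. A small sketch of how a client could observe the resulting placements once the moves complete, assuming only the standard Connection/RegionLocator API (the table name is taken from the log):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(table)) {
      // getAllRegionLocations() reads hbase:meta, so it reflects the regionLocation
      // values written by the assignment procedures above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```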
2023-07-18 19:14:47,292 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,292 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:47,292 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:47,292 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687292"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707687292"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707687292"}]},"ts":"1689707687292"} 2023-07-18 19:14:47,292 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687292"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707687292"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707687292"}]},"ts":"1689707687292"} 2023-07-18 19:14:47,292 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,292 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687292"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707687292"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707687292"}]},"ts":"1689707687292"} 2023-07-18 19:14:47,292 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687292"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707687292"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707687292"}]},"ts":"1689707687292"} 2023-07-18 19:14:47,292 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,293 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687292"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707687292"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707687292"}]},"ts":"1689707687292"} 2023-07-18 19:14:47,295 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=32, state=RUNNABLE; OpenRegionProcedure 
3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:47,298 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=34, state=RUNNABLE; OpenRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:47,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=33, state=RUNNABLE; OpenRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:47,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=39, state=RUNNABLE; OpenRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:47,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=36, state=RUNNABLE; OpenRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:47,374 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 19:14:47,375 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 19:14:47,375 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:14:47,375 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 19:14:47,376 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 19:14:47,376 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 19:14:47,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 
2023-07-18 19:14:47,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 19:14:47,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:47,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,458 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,459 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:47,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 10d7a1838640529f37749e967c70d2c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 19:14:47,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:47,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,461 DEBUG [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f 2023-07-18 19:14:47,461 DEBUG [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f 2023-07-18 19:14:47,461 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 columnFamilyName f 2023-07-18 19:14:47,462 INFO [StoreOpener-3dbc3f4421fd1efc7f8f8b5b7f70a7f9-1] regionserver.HStore(310): Store=3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:47,463 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,466 DEBUG [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/f 2023-07-18 19:14:47,466 DEBUG [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/f 2023-07-18 19:14:47,466 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 10d7a1838640529f37749e967c70d2c1 columnFamilyName f 2023-07-18 19:14:47,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,467 INFO [StoreOpener-10d7a1838640529f37749e967c70d2c1-1] regionserver.HStore(310): Store=10d7a1838640529f37749e967c70d2c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:47,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:47,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3dbc3f4421fd1efc7f8f8b5b7f70a7f9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9794624480, jitterRate=-0.08780451118946075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:47,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9: 2023-07-18 19:14:47,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9., pid=42, masterSystemTime=1689707687448 2023-07-18 19:14:47,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:47,484 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 10d7a1838640529f37749e967c70d2c1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11546889600, jitterRate=0.07538789510726929}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:47,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 10d7a1838640529f37749e967c70d2c1: 2023-07-18 19:14:47,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:47,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:47,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 
2023-07-18 19:14:47,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7046d9ca224f8458b78dab74ca1af4e8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 19:14:47,485 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1., pid=45, masterSystemTime=1689707687454 2023-07-18 19:14:47,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:47,485 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:47,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,486 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687485"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707687485"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707687485"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707687485"}]},"ts":"1689707687485"} 2023-07-18 19:14:47,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:47,487 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:47,487 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 
2023-07-18 19:14:47,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 791eab80de5619734a50e541f7ad3cc4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 19:14:47,488 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,489 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,490 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707687489"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707687489"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707687489"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707687489"}]},"ts":"1689707687489"} 2023-07-18 19:14:47,491 DEBUG [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/f 2023-07-18 19:14:47,491 DEBUG [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/f 2023-07-18 19:14:47,492 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7046d9ca224f8458b78dab74ca1af4e8 columnFamilyName f 2023-07-18 19:14:47,492 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,492 INFO [StoreOpener-7046d9ca224f8458b78dab74ca1af4e8-1] regionserver.HStore(310): Store=7046d9ca224f8458b78dab74ca1af4e8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:47,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=32 2023-07-18 19:14:47,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=32, state=SUCCESS; OpenRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,39561,1689707679120 in 194 msec 2023-07-18 19:14:47,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,501 DEBUG [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/f 2023-07-18 19:14:47,501 DEBUG [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/f 2023-07-18 19:14:47,502 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, REOPEN/MOVE in 560 msec 2023-07-18 19:14:47,502 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 791eab80de5619734a50e541f7ad3cc4 columnFamilyName f 2023-07-18 19:14:47,502 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=39 2023-07-18 19:14:47,502 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=39, state=SUCCESS; OpenRegionProcedure 10d7a1838640529f37749e967c70d2c1, 
server=jenkins-hbase4.apache.org,36387,1689707679286 in 196 msec 2023-07-18 19:14:47,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,503 INFO [StoreOpener-791eab80de5619734a50e541f7ad3cc4-1] regionserver.HStore(310): Store=791eab80de5619734a50e541f7ad3cc4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:47,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,505 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, REOPEN/MOVE in 547 msec 2023-07-18 19:14:47,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:47,509 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7046d9ca224f8458b78dab74ca1af4e8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10109774880, jitterRate=-0.058453842997550964}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:47,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7046d9ca224f8458b78dab74ca1af4e8: 2023-07-18 19:14:47,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:47,511 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8., pid=43, masterSystemTime=1689707687448 2023-07-18 19:14:47,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 791eab80de5619734a50e541f7ad3cc4; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9915817760, jitterRate=-0.0765175074338913}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:47,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 791eab80de5619734a50e541f7ad3cc4: 2023-07-18 19:14:47,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 
2023-07-18 19:14:47,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:47,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4., pid=46, masterSystemTime=1689707687454 2023-07-18 19:14:47,514 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:47,514 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687514"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707687514"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707687514"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707687514"}]},"ts":"1689707687514"} 2023-07-18 19:14:47,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:47,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:47,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 
2023-07-18 19:14:47,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 979a485b795602cf9e48f56b65b3d294, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 19:14:47,517 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687517"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707687517"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707687517"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707687517"}]},"ts":"1689707687517"} 2023-07-18 19:14:47,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:47,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,521 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=34 2023-07-18 19:14:47,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=34, state=SUCCESS; OpenRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,39561,1689707679120 in 218 msec 2023-07-18 19:14:47,523 DEBUG [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/f 2023-07-18 19:14:47,523 DEBUG [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/f 2023-07-18 19:14:47,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=36 2023-07-18 19:14:47,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): 
Finished pid=46, ppid=36, state=SUCCESS; OpenRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,36387,1689707679286 in 217 msec 2023-07-18 19:14:47,524 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 979a485b795602cf9e48f56b65b3d294 columnFamilyName f 2023-07-18 19:14:47,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, REOPEN/MOVE in 578 msec 2023-07-18 19:14:47,525 INFO [StoreOpener-979a485b795602cf9e48f56b65b3d294-1] regionserver.HStore(310): Store=979a485b795602cf9e48f56b65b3d294/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:47,526 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, REOPEN/MOVE in 576 msec 2023-07-18 19:14:47,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:47,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 979a485b795602cf9e48f56b65b3d294; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11412310560, jitterRate=0.06285424530506134}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:47,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 979a485b795602cf9e48f56b65b3d294: 2023-07-18 19:14:47,536 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294., pid=44, masterSystemTime=1689707687454 2023-07-18 19:14:47,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:47,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:47,539 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:47,539 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707687539"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707687539"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707687539"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707687539"}]},"ts":"1689707687539"} 2023-07-18 19:14:47,544 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=33 2023-07-18 19:14:47,544 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=33, state=SUCCESS; OpenRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,36387,1689707679286 in 242 msec 2023-07-18 19:14:47,546 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, REOPEN/MOVE in 603 msec 2023-07-18 19:14:47,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-18 19:14:47,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1065181980. 
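With all five REOPEN/MOVE procedures finished, RSGroupAdminServer reports the table fully moved to Group_testTableMoveTruncateAndDrop_1065181980, and the entries that follow show the client reading the group metadata back (ListRSGroupInfos, GetRSGroupInfoOfTable). A hedged sketch of those read calls, again assuming the branch-2.x RSGroupAdminClient API:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class CheckTableGroup {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Corresponds to the GetRSGroupInfoOfTable RPC logged below.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table is in group: " + info.getName());
      // Corresponds to the ListRSGroupInfos RPC logged below.
      for (RSGroupInfo g : rsGroupAdmin.listRSGroups()) {
        System.out.println(g.getName() + " servers=" + g.getServers() + " tables=" + g.getTables());
      }
    }
  }
}
```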
2023-07-18 19:14:47,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:47,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:47,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:47,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:47,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:47,969 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:47,975 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:47,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:47,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:47,996 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707687996"}]},"ts":"1689707687996"} 2023-07-18 19:14:47,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-18 19:14:47,999 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 19:14:48,001 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 19:14:48,007 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, UNASSIGN}] 2023-07-18 19:14:48,009 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, UNASSIGN 2023-07-18 19:14:48,009 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, UNASSIGN 2023-07-18 19:14:48,009 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, UNASSIGN 2023-07-18 19:14:48,010 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, UNASSIGN 2023-07-18 19:14:48,010 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, UNASSIGN 2023-07-18 19:14:48,012 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,012 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688012"}]},"ts":"1689707688012"} 2023-07-18 19:14:48,012 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,013 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,013 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688012"}]},"ts":"1689707688012"} 2023-07-18 19:14:48,013 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,013 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,013 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688013"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688013"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688013"}]},"ts":"1689707688013"} 2023-07-18 19:14:48,013 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688012"}]},"ts":"1689707688012"} 2023-07-18 19:14:48,013 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688012"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688012"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688012"}]},"ts":"1689707688012"} 2023-07-18 19:14:48,014 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,016 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=48, state=RUNNABLE; CloseRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:48,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=52, state=RUNNABLE; CloseRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,019 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=51, state=RUNNABLE; CloseRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; CloseRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:48,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-18 19:14:48,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:48,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 979a485b795602cf9e48f56b65b3d294, disabling compactions & flushes 2023-07-18 19:14:48,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:48,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 
2023-07-18 19:14:48,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. after waiting 0 ms 2023-07-18 19:14:48,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:48,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:48,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, disabling compactions & flushes 2023-07-18 19:14:48,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:48,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:48,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. after waiting 0 ms 2023-07-18 19:14:48,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 2023-07-18 19:14:48,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:48,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294. 2023-07-18 19:14:48,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 979a485b795602cf9e48f56b65b3d294: 2023-07-18 19:14:48,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:48,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9. 
2023-07-18 19:14:48,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3dbc3f4421fd1efc7f8f8b5b7f70a7f9: 2023-07-18 19:14:48,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:48,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:48,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 10d7a1838640529f37749e967c70d2c1, disabling compactions & flushes 2023-07-18 19:14:48,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:48,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:48,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. after waiting 0 ms 2023-07-18 19:14:48,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:48,187 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=979a485b795602cf9e48f56b65b3d294, regionState=CLOSED 2023-07-18 19:14:48,188 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688187"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688187"}]},"ts":"1689707688187"} 2023-07-18 19:14:48,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:48,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:48,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7046d9ca224f8458b78dab74ca1af4e8, disabling compactions & flushes 2023-07-18 19:14:48,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:48,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:48,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 
after waiting 0 ms 2023-07-18 19:14:48,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:48,192 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, regionState=CLOSED 2023-07-18 19:14:48,192 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688192"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688192"}]},"ts":"1689707688192"} 2023-07-18 19:14:48,195 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:48,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1. 2023-07-18 19:14:48,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 10d7a1838640529f37749e967c70d2c1: 2023-07-18 19:14:48,199 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-18 19:14:48,200 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure 979a485b795602cf9e48f56b65b3d294, server=jenkins-hbase4.apache.org,36387,1689707679286 in 178 msec 2023-07-18 19:14:48,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:48,200 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:48,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 791eab80de5619734a50e541f7ad3cc4, disabling compactions & flushes 2023-07-18 19:14:48,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:48,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:48,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. after waiting 0 ms 2023-07-18 19:14:48,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 
2023-07-18 19:14:48,202 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:48,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=48 2023-07-18 19:14:48,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; CloseRegionProcedure 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, server=jenkins-hbase4.apache.org,39561,1689707679120 in 178 msec 2023-07-18 19:14:48,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8. 2023-07-18 19:14:48,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7046d9ca224f8458b78dab74ca1af4e8: 2023-07-18 19:14:48,205 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=10d7a1838640529f37749e967c70d2c1, regionState=CLOSED 2023-07-18 19:14:48,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=979a485b795602cf9e48f56b65b3d294, UNASSIGN in 196 msec 2023-07-18 19:14:48,205 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688205"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688205"}]},"ts":"1689707688205"} 2023-07-18 19:14:48,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3dbc3f4421fd1efc7f8f8b5b7f70a7f9, UNASSIGN in 200 msec 2023-07-18 19:14:48,208 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:48,209 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7046d9ca224f8458b78dab74ca1af4e8, regionState=CLOSED 2023-07-18 19:14:48,209 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688209"}]},"ts":"1689707688209"} 2023-07-18 19:14:48,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=52 2023-07-18 19:14:48,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; CloseRegionProcedure 10d7a1838640529f37749e967c70d2c1, server=jenkins-hbase4.apache.org,36387,1689707679286 in 190 msec 2023-07-18 19:14:48,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 
19:14:48,215 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4. 2023-07-18 19:14:48,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=10d7a1838640529f37749e967c70d2c1, UNASSIGN in 205 msec 2023-07-18 19:14:48,215 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 791eab80de5619734a50e541f7ad3cc4: 2023-07-18 19:14:48,215 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-18 19:14:48,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; CloseRegionProcedure 7046d9ca224f8458b78dab74ca1af4e8, server=jenkins-hbase4.apache.org,39561,1689707679120 in 191 msec 2023-07-18 19:14:48,217 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:48,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7046d9ca224f8458b78dab74ca1af4e8, UNASSIGN in 213 msec 2023-07-18 19:14:48,250 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=791eab80de5619734a50e541f7ad3cc4, regionState=CLOSED 2023-07-18 19:14:48,250 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688218"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688218"}]},"ts":"1689707688218"} 2023-07-18 19:14:48,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=51 2023-07-18 19:14:48,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=51, state=SUCCESS; CloseRegionProcedure 791eab80de5619734a50e541f7ad3cc4, server=jenkins-hbase4.apache.org,36387,1689707679286 in 234 msec 2023-07-18 19:14:48,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-18 19:14:48,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=791eab80de5619734a50e541f7ad3cc4, UNASSIGN in 254 msec 2023-07-18 19:14:48,260 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707688260"}]},"ts":"1689707688260"} 2023-07-18 19:14:48,262 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 19:14:48,266 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 19:14:48,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 284 msec 2023-07-18 19:14:48,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-18 19:14:48,300 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-18 19:14:48,301 INFO [Listener at localhost/40787] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:48,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:48,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 19:14:48,317 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 19:14:48,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-18 19:14:48,329 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:48,329 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:48,329 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:48,329 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:48,329 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:48,333 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits] 2023-07-18 19:14:48,333 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits] 2023-07-18 19:14:48,333 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits] 2023-07-18 19:14:48,333 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits] 2023-07-18 19:14:48,334 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits] 2023-07-18 19:14:48,349 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8/recovered.edits/7.seqid 2023-07-18 19:14:48,349 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294/recovered.edits/7.seqid 2023-07-18 19:14:48,351 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7046d9ca224f8458b78dab74ca1af4e8 2023-07-18 19:14:48,351 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/979a485b795602cf9e48f56b65b3d294 2023-07-18 19:14:48,351 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9/recovered.edits/7.seqid 2023-07-18 19:14:48,352 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4/recovered.edits/7.seqid 2023-07-18 19:14:48,353 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1/recovered.edits/7.seqid 2023-07-18 19:14:48,353 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3dbc3f4421fd1efc7f8f8b5b7f70a7f9 2023-07-18 19:14:48,354 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/791eab80de5619734a50e541f7ad3cc4 2023-07-18 19:14:48,354 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/10d7a1838640529f37749e967c70d2c1 2023-07-18 19:14:48,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 19:14:48,384 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 19:14:48,387 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 19:14:48,388 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-18 19:14:48,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707688388"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707688388"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707688388"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707688388"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707688388"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,391 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 19:14:48,391 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3dbc3f4421fd1efc7f8f8b5b7f70a7f9, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707685593.3dbc3f4421fd1efc7f8f8b5b7f70a7f9.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 979a485b795602cf9e48f56b65b3d294, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707685593.979a485b795602cf9e48f56b65b3d294.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7046d9ca224f8458b78dab74ca1af4e8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707685593.7046d9ca224f8458b78dab74ca1af4e8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 791eab80de5619734a50e541f7ad3cc4, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707685593.791eab80de5619734a50e541f7ad3cc4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 10d7a1838640529f37749e967c70d2c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707685593.10d7a1838640529f37749e967c70d2c1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 19:14:48,391 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 19:14:48,391 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707688391"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:48,393 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 19:14:48,404 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:48,404 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,405 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,405 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,405 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,405 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 empty. 2023-07-18 19:14:48,406 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 empty. 2023-07-18 19:14:48,406 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b empty. 2023-07-18 19:14:48,406 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 empty. 2023-07-18 19:14:48,406 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c empty. 
2023-07-18 19:14:48,407 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:48,407 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,407 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,407 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,407 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,408 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 19:14:48,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-18 19:14:48,430 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:48,436 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8121815f1bec085dbaed762fc99d3b0b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:48,436 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d972304bebf2eb1fdad5297d7f5ac540, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:48,436 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 23af306975862af2e820131b523ff442, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8121815f1bec085dbaed762fc99d3b0b, disabling compactions & flushes 2023-07-18 19:14:48,467 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. after waiting 0 ms 2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,467 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 
2023-07-18 19:14:48,467 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8121815f1bec085dbaed762fc99d3b0b: 2023-07-18 19:14:48,468 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => c5949bd5f0d16ce28baef5650f80994c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:48,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 23af306975862af2e820131b523ff442, disabling compactions & flushes 2023-07-18 19:14:48,487 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d972304bebf2eb1fdad5297d7f5ac540, disabling compactions & flushes 2023-07-18 19:14:48,487 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:48,487 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. after waiting 0 ms 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 
after waiting 0 ms 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:48,488 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:48,488 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 23af306975862af2e820131b523ff442: 2023-07-18 19:14:48,488 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d972304bebf2eb1fdad5297d7f5ac540: 2023-07-18 19:14:48,489 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => d96d660e8471c8146aea91ceab762384, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:48,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,508 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing c5949bd5f0d16ce28baef5650f80994c, disabling compactions & flushes 2023-07-18 19:14:48,508 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 
after waiting 0 ms 2023-07-18 19:14:48,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,509 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,509 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for c5949bd5f0d16ce28baef5650f80994c: 2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing d96d660e8471c8146aea91ceab762384, disabling compactions & flushes 2023-07-18 19:14:48,513 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. after waiting 0 ms 2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,513 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 
2023-07-18 19:14:48,513 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for d96d660e8471c8146aea91ceab762384: 2023-07-18 19:14:48,517 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688517"}]},"ts":"1689707688517"} 2023-07-18 19:14:48,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688517"}]},"ts":"1689707688517"} 2023-07-18 19:14:48,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688517"}]},"ts":"1689707688517"} 2023-07-18 19:14:48,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688517"}]},"ts":"1689707688517"} 2023-07-18 19:14:48,518 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688517"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707688517"}]},"ts":"1689707688517"} 2023-07-18 19:14:48,523 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-18 19:14:48,524 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707688524"}]},"ts":"1689707688524"} 2023-07-18 19:14:48,526 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 19:14:48,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:48,531 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:48,532 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:48,532 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:48,532 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, ASSIGN}] 2023-07-18 19:14:48,534 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, ASSIGN 2023-07-18 19:14:48,534 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, ASSIGN 2023-07-18 19:14:48,535 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, ASSIGN 2023-07-18 19:14:48,535 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, ASSIGN 2023-07-18 19:14:48,536 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:48,536 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, ASSIGN 2023-07-18 19:14:48,536 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:48,537 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:48,538 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:48,538 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:48,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-18 19:14:48,686 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 19:14:48,690 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=c5949bd5f0d16ce28baef5650f80994c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,690 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=d96d660e8471c8146aea91ceab762384, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,690 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=8121815f1bec085dbaed762fc99d3b0b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,690 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=23af306975862af2e820131b523ff442, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,690 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688690"}]},"ts":"1689707688690"} 2023-07-18 19:14:48,690 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=d972304bebf2eb1fdad5297d7f5ac540, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,690 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688690"}]},"ts":"1689707688690"} 2023-07-18 19:14:48,690 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688690"}]},"ts":"1689707688690"} 2023-07-18 19:14:48,690 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688690"}]},"ts":"1689707688690"} 2023-07-18 19:14:48,690 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707688690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707688690"}]},"ts":"1689707688690"} 2023-07-18 19:14:48,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=60, state=RUNNABLE; OpenRegionProcedure 
8121815f1bec085dbaed762fc99d3b0b, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,693 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure 23af306975862af2e820131b523ff442, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:48,699 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=63, state=RUNNABLE; OpenRegionProcedure d96d660e8471c8146aea91ceab762384, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,703 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; OpenRegionProcedure d972304bebf2eb1fdad5297d7f5ac540, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:48,704 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; OpenRegionProcedure c5949bd5f0d16ce28baef5650f80994c, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:48,855 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d96d660e8471c8146aea91ceab762384, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 19:14:48,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,860 INFO [StoreOpener-d96d660e8471c8146aea91ceab762384-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 
2023-07-18 19:14:48,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5949bd5f0d16ce28baef5650f80994c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 19:14:48,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,863 DEBUG [StoreOpener-d96d660e8471c8146aea91ceab762384-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/f 2023-07-18 19:14:48,863 DEBUG [StoreOpener-d96d660e8471c8146aea91ceab762384-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/f 2023-07-18 19:14:48,863 INFO [StoreOpener-d96d660e8471c8146aea91ceab762384-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d96d660e8471c8146aea91ceab762384 columnFamilyName f 2023-07-18 19:14:48,864 INFO [StoreOpener-d96d660e8471c8146aea91ceab762384-1] regionserver.HStore(310): Store=d96d660e8471c8146aea91ceab762384/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:48,867 INFO [StoreOpener-c5949bd5f0d16ce28baef5650f80994c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,869 DEBUG [StoreOpener-c5949bd5f0d16ce28baef5650f80994c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/f 2023-07-18 19:14:48,869 DEBUG [StoreOpener-c5949bd5f0d16ce28baef5650f80994c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/f 2023-07-18 19:14:48,869 INFO [StoreOpener-c5949bd5f0d16ce28baef5650f80994c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5949bd5f0d16ce28baef5650f80994c columnFamilyName f 2023-07-18 19:14:48,871 INFO [StoreOpener-c5949bd5f0d16ce28baef5650f80994c-1] regionserver.HStore(310): Store=c5949bd5f0d16ce28baef5650f80994c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:48,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:48,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,872 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:48,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d96d660e8471c8146aea91ceab762384; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11071617920, jitterRate=0.031124770641326904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:48,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d96d660e8471c8146aea91ceab762384: 2023-07-18 19:14:48,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384., pid=66, masterSystemTime=1689707688850 2023-07-18 19:14:48,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:48,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:48,878 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8121815f1bec085dbaed762fc99d3b0b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 19:14:48,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,880 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=d96d660e8471c8146aea91ceab762384, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,880 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688880"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707688880"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707688880"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707688880"}]},"ts":"1689707688880"} 2023-07-18 19:14:48,883 INFO [StoreOpener-8121815f1bec085dbaed762fc99d3b0b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,883 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:48,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5949bd5f0d16ce28baef5650f80994c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10055354720, jitterRate=-0.06352211534976959}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:48,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5949bd5f0d16ce28baef5650f80994c: 2023-07-18 19:14:48,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c., pid=68, masterSystemTime=1689707688850 2023-07-18 19:14:48,886 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-18 19:14:48,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; OpenRegionProcedure d96d660e8471c8146aea91ceab762384, server=jenkins-hbase4.apache.org,36387,1689707679286 in 184 msec 2023-07-18 19:14:48,889 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=c5949bd5f0d16ce28baef5650f80994c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,889 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688889"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707688889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707688889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707688889"}]},"ts":"1689707688889"} 2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:48,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 
2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23af306975862af2e820131b523ff442, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 23af306975862af2e820131b523ff442 2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23af306975862af2e820131b523ff442 2023-07-18 19:14:48,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23af306975862af2e820131b523ff442 2023-07-18 19:14:48,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, ASSIGN in 355 msec 2023-07-18 19:14:48,893 DEBUG [StoreOpener-8121815f1bec085dbaed762fc99d3b0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/f 2023-07-18 19:14:48,893 DEBUG [StoreOpener-8121815f1bec085dbaed762fc99d3b0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/f 2023-07-18 19:14:48,894 INFO [StoreOpener-8121815f1bec085dbaed762fc99d3b0b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8121815f1bec085dbaed762fc99d3b0b columnFamilyName f 2023-07-18 19:14:48,894 INFO [StoreOpener-8121815f1bec085dbaed762fc99d3b0b-1] regionserver.HStore(310): Store=8121815f1bec085dbaed762fc99d3b0b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:48,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-18 19:14:48,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; OpenRegionProcedure c5949bd5f0d16ce28baef5650f80994c, server=jenkins-hbase4.apache.org,39561,1689707679120 in 187 msec 2023-07-18 19:14:48,896 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, ASSIGN in 363 msec 2023-07-18 19:14:48,897 INFO [StoreOpener-23af306975862af2e820131b523ff442-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 23af306975862af2e820131b523ff442 2023-07-18 19:14:48,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:48,903 DEBUG [StoreOpener-23af306975862af2e820131b523ff442-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/f 2023-07-18 19:14:48,903 DEBUG [StoreOpener-23af306975862af2e820131b523ff442-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/f 2023-07-18 19:14:48,904 INFO [StoreOpener-23af306975862af2e820131b523ff442-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23af306975862af2e820131b523ff442 columnFamilyName f 2023-07-18 19:14:48,905 INFO [StoreOpener-23af306975862af2e820131b523ff442-1] regionserver.HStore(310): Store=23af306975862af2e820131b523ff442/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:48,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:48,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:48,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:48,919 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8121815f1bec085dbaed762fc99d3b0b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10204696960, jitterRate=-0.04961353540420532}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:48,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8121815f1bec085dbaed762fc99d3b0b: 2023-07-18 19:14:48,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-18 19:14:48,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b., pid=64, masterSystemTime=1689707688850 2023-07-18 19:14:48,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23af306975862af2e820131b523ff442 2023-07-18 19:14:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:48,927 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 
2023-07-18 19:14:48,927 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=8121815f1bec085dbaed762fc99d3b0b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,927 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688927"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707688927"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707688927"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707688927"}]},"ts":"1689707688927"} 2023-07-18 19:14:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d972304bebf2eb1fdad5297d7f5ac540, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 19:14:48,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:48,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,932 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=60 2023-07-18 19:14:48,932 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; OpenRegionProcedure 8121815f1bec085dbaed762fc99d3b0b, server=jenkins-hbase4.apache.org,36387,1689707679286 in 238 msec 2023-07-18 19:14:48,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, ASSIGN in 401 msec 2023-07-18 19:14:48,944 INFO [StoreOpener-d972304bebf2eb1fdad5297d7f5ac540-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:48,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23af306975862af2e820131b523ff442; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11251657760, jitterRate=0.047892287373542786}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:48,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23af306975862af2e820131b523ff442: 2023-07-18 19:14:48,947 DEBUG [StoreOpener-d972304bebf2eb1fdad5297d7f5ac540-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/f 2023-07-18 19:14:48,947 DEBUG [StoreOpener-d972304bebf2eb1fdad5297d7f5ac540-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/f 2023-07-18 19:14:48,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442., pid=65, masterSystemTime=1689707688850 2023-07-18 19:14:48,947 INFO [StoreOpener-d972304bebf2eb1fdad5297d7f5ac540-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d972304bebf2eb1fdad5297d7f5ac540 columnFamilyName f 2023-07-18 19:14:48,948 INFO [StoreOpener-d972304bebf2eb1fdad5297d7f5ac540-1] regionserver.HStore(310): Store=d972304bebf2eb1fdad5297d7f5ac540/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:48,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:48,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 
2023-07-18 19:14:48,953 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=23af306975862af2e820131b523ff442, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:48,953 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707688953"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707688953"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707688953"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707688953"}]},"ts":"1689707688953"} 2023-07-18 19:14:48,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:48,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:48,961 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d972304bebf2eb1fdad5297d7f5ac540; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10642684800, jitterRate=-0.008822739124298096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:48,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d972304bebf2eb1fdad5297d7f5ac540: 2023-07-18 19:14:48,963 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540., pid=67, masterSystemTime=1689707688850 2023-07-18 19:14:48,963 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-18 19:14:48,963 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure 23af306975862af2e820131b523ff442, server=jenkins-hbase4.apache.org,39561,1689707679120 in 265 msec 2023-07-18 19:14:48,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:48,965 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 
2023-07-18 19:14:48,965 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, ASSIGN in 431 msec 2023-07-18 19:14:48,966 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=d972304bebf2eb1fdad5297d7f5ac540, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:48,966 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707688966"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707688966"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707688966"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707688966"}]},"ts":"1689707688966"} 2023-07-18 19:14:48,970 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-18 19:14:48,973 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=61, state=SUCCESS; OpenRegionProcedure d972304bebf2eb1fdad5297d7f5ac540, server=jenkins-hbase4.apache.org,36387,1689707679286 in 268 msec 2023-07-18 19:14:48,974 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=58 2023-07-18 19:14:48,974 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, ASSIGN in 438 msec 2023-07-18 19:14:48,974 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707688974"}]},"ts":"1689707688974"} 2023-07-18 19:14:48,976 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 19:14:48,979 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 19:14:48,981 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 671 msec 2023-07-18 19:14:49,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-18 19:14:49,425 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-18 19:14:49,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:49,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): 
Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:49,429 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-18 19:14:49,434 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707689434"}]},"ts":"1689707689434"} 2023-07-18 19:14:49,435 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 19:14:49,437 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 19:14:49,438 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, UNASSIGN}] 2023-07-18 19:14:49,440 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, UNASSIGN 2023-07-18 19:14:49,440 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, UNASSIGN 2023-07-18 19:14:49,440 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, UNASSIGN 2023-07-18 19:14:49,440 INFO 
[PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, UNASSIGN 2023-07-18 19:14:49,441 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, UNASSIGN 2023-07-18 19:14:49,441 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=23af306975862af2e820131b523ff442, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:49,441 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707689441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707689441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707689441"}]},"ts":"1689707689441"} 2023-07-18 19:14:49,441 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=8121815f1bec085dbaed762fc99d3b0b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:49,441 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707689441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707689441"}]},"ts":"1689707689441"} 2023-07-18 19:14:49,442 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d972304bebf2eb1fdad5297d7f5ac540, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:49,442 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=d96d660e8471c8146aea91ceab762384, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:49,442 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c5949bd5f0d16ce28baef5650f80994c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:49,442 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707689442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707689442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707689442"}]},"ts":"1689707689442"} 2023-07-18 19:14:49,442 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707689442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707689442"}]},"ts":"1689707689442"} 2023-07-18 19:14:49,442 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689442"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707689442"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707689442"}]},"ts":"1689707689442"} 2023-07-18 19:14:49,443 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 23af306975862af2e820131b523ff442, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:49,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 8121815f1bec085dbaed762fc99d3b0b, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:49,445 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure d96d660e8471c8146aea91ceab762384, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:49,446 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=72, state=RUNNABLE; CloseRegionProcedure d972304bebf2eb1fdad5297d7f5ac540, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:49,447 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=73, state=RUNNABLE; CloseRegionProcedure c5949bd5f0d16ce28baef5650f80994c, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:49,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-18 19:14:49,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:49,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5949bd5f0d16ce28baef5650f80994c, disabling compactions & flushes 2023-07-18 19:14:49,597 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:49,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:49,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. after waiting 0 ms 2023-07-18 19:14:49,597 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 
2023-07-18 19:14:49,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:49,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d972304bebf2eb1fdad5297d7f5ac540, disabling compactions & flushes 2023-07-18 19:14:49,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:49,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:49,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:49,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. after waiting 0 ms 2023-07-18 19:14:49,603 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:49,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c. 2023-07-18 19:14:49,604 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5949bd5f0d16ce28baef5650f80994c: 2023-07-18 19:14:49,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:49,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 23af306975862af2e820131b523ff442 2023-07-18 19:14:49,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23af306975862af2e820131b523ff442, disabling compactions & flushes 2023-07-18 19:14:49,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:49,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 2023-07-18 19:14:49,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. after waiting 0 ms 2023-07-18 19:14:49,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 
2023-07-18 19:14:49,608 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=c5949bd5f0d16ce28baef5650f80994c, regionState=CLOSED 2023-07-18 19:14:49,608 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689608"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707689608"}]},"ts":"1689707689608"} 2023-07-18 19:14:49,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:49,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540. 2023-07-18 19:14:49,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d972304bebf2eb1fdad5297d7f5ac540: 2023-07-18 19:14:49,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:49,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:49,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d96d660e8471c8146aea91ceab762384, disabling compactions & flushes 2023-07-18 19:14:49,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:49,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:49,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. after waiting 0 ms 2023-07-18 19:14:49,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 
2023-07-18 19:14:49,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=73 2023-07-18 19:14:49,614 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=d972304bebf2eb1fdad5297d7f5ac540, regionState=CLOSED 2023-07-18 19:14:49,614 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=73, state=SUCCESS; CloseRegionProcedure c5949bd5f0d16ce28baef5650f80994c, server=jenkins-hbase4.apache.org,39561,1689707679120 in 163 msec 2023-07-18 19:14:49,614 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689614"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707689614"}]},"ts":"1689707689614"} 2023-07-18 19:14:49,617 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c5949bd5f0d16ce28baef5650f80994c, UNASSIGN in 176 msec 2023-07-18 19:14:49,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:49,620 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:49,620 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=72 2023-07-18 19:14:49,621 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=72, state=SUCCESS; CloseRegionProcedure d972304bebf2eb1fdad5297d7f5ac540, server=jenkins-hbase4.apache.org,36387,1689707679286 in 170 msec 2023-07-18 19:14:49,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384. 2023-07-18 19:14:49,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d96d660e8471c8146aea91ceab762384: 2023-07-18 19:14:49,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442. 
2023-07-18 19:14:49,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23af306975862af2e820131b523ff442: 2023-07-18 19:14:49,623 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d972304bebf2eb1fdad5297d7f5ac540, UNASSIGN in 183 msec 2023-07-18 19:14:49,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:49,623 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:49,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8121815f1bec085dbaed762fc99d3b0b, disabling compactions & flushes 2023-07-18 19:14:49,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:49,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:49,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. after waiting 0 ms 2023-07-18 19:14:49,624 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 
2023-07-18 19:14:49,625 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=d96d660e8471c8146aea91ceab762384, regionState=CLOSED 2023-07-18 19:14:49,625 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707689624"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707689624"}]},"ts":"1689707689624"} 2023-07-18 19:14:49,625 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 23af306975862af2e820131b523ff442 2023-07-18 19:14:49,626 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=23af306975862af2e820131b523ff442, regionState=CLOSED 2023-07-18 19:14:49,627 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689707689626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707689626"}]},"ts":"1689707689626"} 2023-07-18 19:14:49,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:49,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b. 2023-07-18 19:14:49,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8121815f1bec085dbaed762fc99d3b0b: 2023-07-18 19:14:49,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-18 19:14:49,631 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure d96d660e8471c8146aea91ceab762384, server=jenkins-hbase4.apache.org,36387,1689707679286 in 182 msec 2023-07-18 19:14:49,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-18 19:14:49,632 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 23af306975862af2e820131b523ff442, server=jenkins-hbase4.apache.org,39561,1689707679120 in 185 msec 2023-07-18 19:14:49,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:49,633 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d96d660e8471c8146aea91ceab762384, UNASSIGN in 193 msec 2023-07-18 19:14:49,633 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=8121815f1bec085dbaed762fc99d3b0b, regionState=CLOSED 2023-07-18 19:14:49,633 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689707689633"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707689633"}]},"ts":"1689707689633"} 2023-07-18 19:14:49,634 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23af306975862af2e820131b523ff442, UNASSIGN in 194 msec 2023-07-18 19:14:49,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-18 19:14:49,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 8121815f1bec085dbaed762fc99d3b0b, server=jenkins-hbase4.apache.org,36387,1689707679286 in 190 msec 2023-07-18 19:14:49,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=69 2023-07-18 19:14:49,643 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8121815f1bec085dbaed762fc99d3b0b, UNASSIGN in 203 msec 2023-07-18 19:14:49,644 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707689644"}]},"ts":"1689707689644"} 2023-07-18 19:14:49,645 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 19:14:49,647 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 19:14:49,649 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 218 msec 2023-07-18 19:14:49,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-18 19:14:49,737 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-18 19:14:49,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,751 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1065181980' 2023-07-18 19:14:49,752 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 
19:14:49,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:49,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:49,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:49,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-18 19:14:49,766 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:49,766 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:49,766 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:49,766 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:49,766 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:49,770 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/recovered.edits] 2023-07-18 19:14:49,770 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/recovered.edits] 2023-07-18 19:14:49,771 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/recovered.edits] 2023-07-18 19:14:49,772 
DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/recovered.edits] 2023-07-18 19:14:49,772 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/recovered.edits] 2023-07-18 19:14:49,781 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442/recovered.edits/4.seqid 2023-07-18 19:14:49,781 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384/recovered.edits/4.seqid 2023-07-18 19:14:49,781 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c/recovered.edits/4.seqid 2023-07-18 19:14:49,782 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540/recovered.edits/4.seqid 2023-07-18 19:14:49,782 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23af306975862af2e820131b523ff442 2023-07-18 19:14:49,783 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d96d660e8471c8146aea91ceab762384 2023-07-18 19:14:49,783 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c5949bd5f0d16ce28baef5650f80994c 2023-07-18 19:14:49,783 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d972304bebf2eb1fdad5297d7f5ac540 2023-07-18 19:14:49,784 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b/recovered.edits/4.seqid 2023-07-18 19:14:49,784 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8121815f1bec085dbaed762fc99d3b0b 2023-07-18 19:14:49,785 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 19:14:49,787 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,795 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 19:14:49,798 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 19:14:49,799 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,799 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-18 19:14:49,800 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707689799"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,800 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707689799"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,800 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707689799"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,800 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707689799"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,800 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707689799"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,803 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 19:14:49,803 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 23af306975862af2e820131b523ff442, NAME => 'Group_testTableMoveTruncateAndDrop,,1689707688356.23af306975862af2e820131b523ff442.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8121815f1bec085dbaed762fc99d3b0b, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689707688356.8121815f1bec085dbaed762fc99d3b0b.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => d972304bebf2eb1fdad5297d7f5ac540, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689707688356.d972304bebf2eb1fdad5297d7f5ac540.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c5949bd5f0d16ce28baef5650f80994c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689707688356.c5949bd5f0d16ce28baef5650f80994c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d96d660e8471c8146aea91ceab762384, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689707688356.d96d660e8471c8146aea91ceab762384.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 19:14:49,803 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 19:14:49,803 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707689803"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:49,806 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 19:14:49,809 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 19:14:49,821 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 66 msec 2023-07-18 19:14:49,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-18 19:14:49,866 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-18 19:14:49,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:49,871 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36387] ipc.CallRunner(144): callId: 165 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:55112 deadline: 1689707749870, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44751 startCode=1689707683024. As of locationSeqNum=6. 2023-07-18 19:14:49,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:49,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:49,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:49,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:49,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:49,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:14:49,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:49,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:49,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:49,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1065181980, current retry=0 2023-07-18 19:14:49,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:49,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1065181980 => default 2023-07-18 19:14:49,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1065181980 2023-07-18 19:14:50,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:14:50,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:50,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:50,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:50,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:50,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:50,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:50,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,023 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:50,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:50,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:50,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:50,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708890040, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:50,041 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:50,043 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:50,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,045 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:50,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:50,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:50,075 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=509 (was 423) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1144594579_17 at /127.0.0.1:35562 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:42356 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1144594579_17 at /127.0.0.1:46240 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44751 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62147@0x3270d993-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:44751Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7b3db8b3-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1648033731-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:35500 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-639-acceptor-0@551961eb-ServerConnector@3e629c81{HTTP/1.1, (http/1.1)}{0.0.0.0:36151} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:42228 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62147@0x3270d993-SendThread(127.0.0.1:62147) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:42182 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44967 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:35484 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44751-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:44967 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3-prefix:jenkins-hbase4.apache.org,44751,1689707683024 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1144594579_17 at /127.0.0.1:42204 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:46184 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1144594579_17 at /127.0.0.1:35532 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62147@0x3270d993 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3-prefix:jenkins-hbase4.apache.org,41417,1689707679207.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648033731-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=829 (was 685) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=386 (was 394), ProcessCount=173 (was 173), AvailableMemoryMB=3375 (was 3941) 2023-07-18 19:14:50,076 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-18 19:14:50,093 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=509, OpenFileDescriptor=829, MaxFileDescriptor=60000, SystemLoadAverage=386, ProcessCount=173, AvailableMemoryMB=3375 2023-07-18 19:14:50,093 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-18 19:14:50,093 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 19:14:50,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:50,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:50,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:50,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:50,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:50,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:50,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,111 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:50,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:50,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:50,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:50,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708890126, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:50,127 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:50,129 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:50,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,130 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:50,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:50,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:50,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 19:14:50,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36768 deadline: 1689708890132, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 19:14:50,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 19:14:50,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:36768 deadline: 1689708890134, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 19:14:50,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 19:14:50,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:36768 deadline: 1689708890135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 19:14:50,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 19:14:50,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 19:14:50,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:50,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:50,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:50,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:50,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:50,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 19:14:50,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:14:50,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:50,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:50,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:50,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:50,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,187 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:50,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:50,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:50,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:50,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708890203, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:50,204 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:50,206 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:50,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,207 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:50,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:50,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:50,226 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=512 (was 509) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=829 (was 829), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=386 (was 386), ProcessCount=173 (was 173), AvailableMemoryMB=3387 (was 3375) - AvailableMemoryMB LEAK? - 2023-07-18 19:14:50,226 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 19:14:50,246 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512, OpenFileDescriptor=829, MaxFileDescriptor=60000, SystemLoadAverage=386, ProcessCount=173, AvailableMemoryMB=3386 2023-07-18 19:14:50,246 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 19:14:50,246 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 19:14:50,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:50,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:50,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:50,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:50,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:50,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:50,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:50,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:50,268 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:50,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:50,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:50,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:50,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:50,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708890280, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:50,281 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:50,283 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:50,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,284 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:50,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:50,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:50,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:50,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:50,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
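The trace above originates in TestRSGroupsBase.tearDownAfterMethod/setUpBeforeMethod, which moves servers back to the default group between tests; the wrapped ConstraintException, raised in RSGroupAdminServer.moveServers on the master, indicates that one address passed to it (jenkins-hbase4.apache.org:43617, the master RPC port seen elsewhere in this log) is not a live region server in the group table. The entries that follow show the next test step: group "bar" is added and three region servers are moved into it. A minimal, illustrative sketch of that client-side sequence using the RSGroupAdminClient API named in the trace (connection setup and the exact address list are assumptions, not taken from the test source):

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Create the target group, as in the "add rsgroup bar" request above.
      rsGroupAdmin.addRSGroup("bar");

      // Addresses are illustrative; in the test they come from the mini cluster.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 36387));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41417));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39561));

      // The master rejects the call with a ConstraintException if any address
      // is offline or unknown, which is what the RemoteWithExtrasException
      // in the trace above wraps.
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}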
2023-07-18 19:14:50,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:50,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:50,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:50,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:50,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:50,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:39561] to rsgroup bar 2023-07-18 19:14:50,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:50,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:50,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:50,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:50,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 19:14:50,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 19:14:50,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 19:14:50,325 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 19:14:50,326 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41417,1689707679207, state=CLOSING 2023-07-18 19:14:50,328 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, 
path=/hbase/meta-region-server 2023-07-18 19:14:50,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:14:50,328 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:50,483 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 19:14:50,484 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:14:50,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:14:50,485 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:14:50,485 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:14:50,485 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:14:50,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=39.91 KB heapSize=61.45 KB 2023-07-18 19:14:50,516 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.02 KB at sequenceid=104 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,524 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,541 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=104 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/rep_barrier/c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,547 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,564 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=104 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:50,571 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:50,573 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/7fdfcbf5cd0e4cfd9d783455ee56e515 as 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,582 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/7fdfcbf5cd0e4cfd9d783455ee56e515, entries=40, sequenceid=104, filesize=9.4 K 2023-07-18 19:14:50,584 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/rep_barrier/c0d86d23aae54a22a56ef164d211ce7b as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier/c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,592 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier/c0d86d23aae54a22a56ef164d211ce7b, entries=10, sequenceid=104, filesize=6.1 K 2023-07-18 19:14:50,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/d872c3c6a5e04eb99f17d8b8c58cd037 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:50,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:50,602 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/d872c3c6a5e04eb99f17d8b8c58cd037, entries=11, sequenceid=104, filesize=6.0 K 2023-07-18 19:14:50,604 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~39.91 KB/40867, heapSize ~61.40 KB/62872, currentSize=0 B/0 for 1588230740 in 119ms, sequenceid=104, compaction requested=false 2023-07-18 19:14:50,617 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/recovered.edits/107.seqid, newMaxSeqId=107, maxSeqId=19 2023-07-18 19:14:50,617 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:14:50,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:14:50,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal 
for 1588230740: 2023-07-18 19:14:50,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44751,1689707683024 record at close sequenceid=104 2023-07-18 19:14:50,620 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 19:14:50,620 WARN [PEWorker-1] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 19:14:50,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-18 19:14:50,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41417,1689707679207 in 292 msec 2023-07-18 19:14:50,623 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:50,773 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44751,1689707683024, state=OPENING 2023-07-18 19:14:50,775 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:50,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:50,775 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:50,933 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 19:14:50,933 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:14:50,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44751%2C1689707683024.meta, suffix=.meta, logDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024, archiveDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs, maxLogs=32 2023-07-18 19:14:50,954 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK] 2023-07-18 19:14:50,957 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK] 2023-07-18 19:14:50,959 DEBUG [RS-EventLoopGroup-7-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK] 2023-07-18 19:14:50,962 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024/jenkins-hbase4.apache.org%2C44751%2C1689707683024.meta.1689707690936.meta 2023-07-18 19:14:50,962 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46877,DS-ca4a6244-a1d0-4141-9e63-a51dd88baded,DISK], DatanodeInfoWithStorage[127.0.0.1:42397,DS-2103302f-84d7-4ff9-aaf8-b2138d78776d,DISK], DatanodeInfoWithStorage[127.0.0.1:33839,DS-ca485556-ee09-4bc3-9270-847b7b30f4d3,DISK]] 2023-07-18 19:14:50,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:50,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:14:50,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 19:14:50,963 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 19:14:50,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 19:14:50,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:50,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 19:14:50,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 19:14:50,965 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:14:50,967 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:50,967 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info 2023-07-18 19:14:50,967 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:14:50,976 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,976 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/7fdfcbf5cd0e4cfd9d783455ee56e515 2023-07-18 19:14:50,983 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/b301ff35b43c4ca3acd7df2d5e3cbb87 2023-07-18 19:14:50,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:50,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:14:50,984 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:50,984 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:14:50,984 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:14:50,992 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,992 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier/c0d86d23aae54a22a56ef164d211ce7b 2023-07-18 19:14:50,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:50,993 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:14:50,994 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:50,994 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table 2023-07-18 19:14:50,995 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:14:51,003 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/901dcf925d0b4367b19d43b1cc4dfffb 2023-07-18 19:14:51,013 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:51,013 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/d872c3c6a5e04eb99f17d8b8c58cd037 2023-07-18 19:14:51,013 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:51,014 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:51,016 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740 2023-07-18 19:14:51,018 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 19:14:51,020 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:14:51,021 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=108; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11591206880, jitterRate=0.07951526343822479}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:14:51,021 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:14:51,022 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=83, masterSystemTime=1689707690928 2023-07-18 19:14:51,024 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 19:14:51,024 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 19:14:51,024 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44751,1689707683024, state=OPEN 2023-07-18 19:14:51,026 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:14:51,026 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:14:51,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-18 19:14:51,028 
INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44751,1689707683024 in 251 msec 2023-07-18 19:14:51,029 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 705 msec 2023-07-18 19:14:51,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-18 19:14:51,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120, jenkins-hbase4.apache.org,41417,1689707679207] are moved back to default 2023-07-18 19:14:51,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 19:14:51,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:51,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:51,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:51,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 19:14:51,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:51,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:51,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:51,340 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:51,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-18 19:14:51,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 19:14:51,343 DEBUG 
[PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:51,344 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:51,344 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:51,345 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:51,355 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:51,357 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41417] ipc.CallRunner(144): callId: 194 service: ClientService methodName: Get size: 142 connection: 172.31.14.131:55204 deadline: 1689707751356, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44751 startCode=1689707683024. As of locationSeqNum=104. 2023-07-18 19:14:51,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 19:14:51,459 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,460 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 empty. 2023-07-18 19:14:51,460 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,460 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 19:14:51,481 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:51,482 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 793b564db9f6c3e94a3ee13d86ee23a5, NAME => 'Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] 
regionserver.HRegion(1604): Closing 793b564db9f6c3e94a3ee13d86ee23a5, disabling compactions & flushes 2023-07-18 19:14:51,504 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. after waiting 0 ms 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:51,504 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:51,504 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:51,507 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:51,508 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707691508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707691508"}]},"ts":"1689707691508"} 2023-07-18 19:14:51,510 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
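A client-side equivalent of the CreateTableProcedure being executed here (pid=84) would look roughly like the sketch below: a Group_testFailRemoveGroup table with a single 'f' family, VERSIONS 1 and no bloom filter, matching the descriptor logged above. This is an assumed reconstruction through the standard Admin API, not the test's own code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'f'; explicitly set the two non-default-looking
      // attributes from the logged descriptor (VERSIONS => '1', BLOOMFILTER => 'NONE').
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setMaxVersions(1)
              .setBloomFilterType(BloomType.NONE)
              .build())
          .build();
      // Drives a CreateTableProcedure on the master, as in the log entries above.
      admin.createTable(desc);
    }
  }
}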
2023-07-18 19:14:51,511 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:51,511 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707691511"}]},"ts":"1689707691511"} 2023-07-18 19:14:51,512 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 19:14:51,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, ASSIGN}] 2023-07-18 19:14:51,519 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, ASSIGN 2023-07-18 19:14:51,520 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:51,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 19:14:51,672 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:51,672 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707691672"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707691672"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707691672"}]},"ts":"1689707691672"} 2023-07-18 19:14:51,674 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:51,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
2023-07-18 19:14:51,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 793b564db9f6c3e94a3ee13d86ee23a5, NAME => 'Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:51,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:51,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,834 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,836 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:51,836 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:51,836 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 793b564db9f6c3e94a3ee13d86ee23a5 columnFamilyName f 2023-07-18 19:14:51,837 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(310): Store=793b564db9f6c3e94a3ee13d86ee23a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:51,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,839 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:51,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:51,846 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 793b564db9f6c3e94a3ee13d86ee23a5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10750738880, jitterRate=0.0012405812740325928}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:51,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:51,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5., pid=86, masterSystemTime=1689707691827 2023-07-18 19:14:51,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:51,849 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
2023-07-18 19:14:51,850 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:51,850 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707691849"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707691849"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707691849"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707691849"}]},"ts":"1689707691849"} 2023-07-18 19:14:51,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-18 19:14:51,854 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 177 msec 2023-07-18 19:14:51,861 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 19:14:51,861 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, ASSIGN in 338 msec 2023-07-18 19:14:51,864 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:51,864 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707691864"}]},"ts":"1689707691864"} 2023-07-18 19:14:51,867 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 19:14:51,870 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:51,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 533 msec 2023-07-18 19:14:51,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 19:14:51,947 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-18 19:14:51,947 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 19:14:51,947 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:51,948 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41417] ipc.CallRunner(144): callId: 276 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:55224 deadline: 1689707751948, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44751 startCode=1689707683024. As of locationSeqNum=104. 2023-07-18 19:14:51,970 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 19:14:52,050 DEBUG [hconnection-0x5f700c8a-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:14:52,058 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36406, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:14:52,071 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 19:14:52,071 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:52,071 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 19:14:52,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 19:14:52,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:52,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:52,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:52,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:52,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 19:14:52,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 793b564db9f6c3e94a3ee13d86ee23a5 to RSGroup bar 2023-07-18 19:14:52,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] 
balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:52,081 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE 2023-07-18 19:14:52,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 19:14:52,083 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE 2023-07-18 19:14:52,083 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:52,084 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707692083"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707692083"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707692083"}]},"ts":"1689707692083"} 2023-07-18 19:14:52,088 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:52,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 793b564db9f6c3e94a3ee13d86ee23a5, disabling compactions & flushes 2023-07-18 19:14:52,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:52,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:52,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. after waiting 0 ms 2023-07-18 19:14:52,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
2023-07-18 19:14:52,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:52,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:52,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:52,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 793b564db9f6c3e94a3ee13d86ee23a5 move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:52,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,267 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSED 2023-07-18 19:14:52,267 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707692267"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707692267"}]},"ts":"1689707692267"} 2023-07-18 19:14:52,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 19:14:52,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 183 msec 2023-07-18 19:14:52,271 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:52,421 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
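At this point the close half of the move is done: the region was closed on jenkins-hbase4.apache.org,44751, a recovered.edits/4.seqid marker was written, and the balancer has picked the group-bar server jenkins-hbase4.apache.org,36387 as the new location. Once the open half completes, a client can observe the new placement through the standard RegionLocator API; the RegionMovedException logged earlier is what a client sees when its cached location is stale. A small sketch (class and method names are illustrative; reload=true forces a fresh hbase:meta lookup instead of the cached location):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class ShowRegionLocation {
      // Prints the server hosting the first region of the table, bypassing the
      // client-side location cache so a just-finished move is visible immediately.
      static void printLocation(Connection conn, TableName table) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(table + " region is on " + loc.getServerName());
        }
      }
    }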
2023-07-18 19:14:52,422 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:52,422 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707692422"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707692422"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707692422"}]},"ts":"1689707692422"} 2023-07-18 19:14:52,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:52,580 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:52,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 793b564db9f6c3e94a3ee13d86ee23a5, NAME => 'Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:52,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:52,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,582 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,583 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:52,584 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:52,584 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 793b564db9f6c3e94a3ee13d86ee23a5 columnFamilyName f 2023-07-18 19:14:52,585 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(310): Store=793b564db9f6c3e94a3ee13d86ee23a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:52,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:52,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 793b564db9f6c3e94a3ee13d86ee23a5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10070839680, jitterRate=-0.06207996606826782}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:52,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:52,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5., pid=89, masterSystemTime=1689707692576 2023-07-18 19:14:52,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:52,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
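The StoreOpener lines above dump the effective CompactionConfiguration for column family f as the region opens on its new server (minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2). These values correspond to the stock hbase-site.xml keys; a hedged sketch of setting the same defaults explicitly (whether to override them at all depends on the workload, and nothing in this test does so):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionDefaults {
      static Configuration withCompactionDefaults() {
        Configuration conf = HBaseConfiguration.create();
        // Matches the values the StoreOpener reported above.
        conf.setInt("hbase.hstore.compaction.min", 3);    // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);   // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        return conf;
      }
    }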
2023-07-18 19:14:52,594 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:52,594 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707692594"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707692594"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707692594"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707692594"}]},"ts":"1689707692594"} 2023-07-18 19:14:52,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-18 19:14:52,598 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,36387,1689707679286 in 172 msec 2023-07-18 19:14:52,600 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE in 517 msec 2023-07-18 19:14:53,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-18 19:14:53,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 2023-07-18 19:14:53,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:53,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:53,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:53,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 19:14:53,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:53,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 19:14:53,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:53,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36768 deadline: 1689708893091, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 19:14:53,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:14:53,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:53,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:36768 deadline: 1689708893093, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
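The two DEBUG entries above are the heart of testFailRemoveGroup: while rsgroup bar still owns Group_testFailRemoveGroup, removeRSGroup is rejected with a ConstraintException, and so is the attempt to move all three of bar's servers back to default, since that would leave the table with no servers to host it. A sketch of how a client would assert both rejections (an illustrative helper, not the test's actual code; as the later stack traces in this log show, the error reaches the client unwrapped as a ConstraintException):

    import java.util.Set;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class FailRemoveGroupChecks {
      // Both calls must fail while group "bar" still owns a table.
      static void expectRemoveAndDrainToFail(RSGroupAdminClient rsGroupAdmin,
          Set<Address> barServers) throws Exception {
        try {
          rsGroupAdmin.removeRSGroup("bar");
          throw new AssertionError("removeRSGroup should fail while bar still has tables");
        } catch (ConstraintException expected) {
          // "RSGroup bar has 1 tables; you must remove these tables ..."
        }
        try {
          rsGroupAdmin.moveServers(barServers, "default");
          throw new AssertionError("moveServers should fail: bar's table would lose its hosts");
        } catch (ConstraintException expected) {
          // "Cannot leave a RSGroup bar that contains tables without servers to host them."
        }
      }
    }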
2023-07-18 19:14:53,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 19:14:53,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:53,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:53,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:53,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:53,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 19:14:53,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 793b564db9f6c3e94a3ee13d86ee23a5 to RSGroup default 2023-07-18 19:14:53,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE 2023-07-18 19:14:53,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 19:14:53,107 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE 2023-07-18 19:14:53,108 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:53,108 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707693108"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707693108"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707693108"}]},"ts":"1689707693108"} 2023-07-18 19:14:53,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:53,251 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-18 19:14:53,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 793b564db9f6c3e94a3ee13d86ee23a5, disabling compactions & flushes 2023-07-18 19:14:53,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. after waiting 0 ms 2023-07-18 19:14:53,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:53,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:53,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 793b564db9f6c3e94a3ee13d86ee23a5 move to jenkins-hbase4.apache.org,44751,1689707683024 record at close sequenceid=5 2023-07-18 19:14:53,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,279 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSED 2023-07-18 19:14:53,279 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707693278"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707693278"}]},"ts":"1689707693278"} 2023-07-18 19:14:53,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-18 19:14:53,285 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,36387,1689707679286 in 170 msec 2023-07-18 19:14:53,287 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:53,437 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:53,437 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707693437"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707693437"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707693437"}]},"ts":"1689707693437"} 2023-07-18 19:14:53,440 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:53,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 793b564db9f6c3e94a3ee13d86ee23a5, NAME => 'Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:53,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:53,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,599 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,600 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:53,600 DEBUG [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f 2023-07-18 19:14:53,600 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 793b564db9f6c3e94a3ee13d86ee23a5 columnFamilyName f 2023-07-18 19:14:53,601 INFO [StoreOpener-793b564db9f6c3e94a3ee13d86ee23a5-1] regionserver.HStore(310): Store=793b564db9f6c3e94a3ee13d86ee23a5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:53,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:53,606 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 793b564db9f6c3e94a3ee13d86ee23a5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9993309920, jitterRate=-0.06930048763751984}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:53,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:53,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5., pid=92, masterSystemTime=1689707693593 2023-07-18 19:14:53,608 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:53,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
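Here the table has been moved back: the pid=90 REOPEN/MOVE reopens the region on jenkins-hbase4.apache.org,44751 in the default group. The entries that follow record the teardown order that the earlier ConstraintExceptions enforce: another removeRSGroup attempt still fails ("RSGroup bar has 3 servers"), then the servers are moved back to default, and only then does RemoveRSGroup succeed (the ZK GroupInfo count drops to 5). As a sketch, the sequence that ultimately succeeds, using the same illustrative client objects as above:

    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class DrainAndRemoveGroup {
      // Order matters: tables out first, then servers, then the (now empty) group.
      static void drainAndRemove(RSGroupAdminClient rsGroupAdmin, TableName table,
          Set<Address> barServers) throws Exception {
        rsGroupAdmin.moveTables(Collections.singleton(table), "default");
        rsGroupAdmin.moveServers(barServers, "default");
        rsGroupAdmin.removeRSGroup("bar");
      }
    }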
2023-07-18 19:14:53,608 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:53,609 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707693608"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707693608"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707693608"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707693608"}]},"ts":"1689707693608"} 2023-07-18 19:14:53,611 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-18 19:14:53,611 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 170 msec 2023-07-18 19:14:53,613 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, REOPEN/MOVE in 507 msec 2023-07-18 19:14:54,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-18 19:14:54,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-18 19:14:54,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:54,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 19:14:54,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:54,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:36768 deadline: 1689708894113, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 2023-07-18 19:14:54,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:14:54,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 19:14:54,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:54,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 19:14:54,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120, jenkins-hbase4.apache.org,41417,1689707679207] are moved back to bar 2023-07-18 19:14:54,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 19:14:54,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:54,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,126 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 19:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:14:54,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:54,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,138 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 19:14:54,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 19:14:54,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 19:14:54,142 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707694142"}]},"ts":"1689707694142"} 2023-07-18 19:14:54,143 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 19:14:54,145 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 19:14:54,146 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, UNASSIGN}] 2023-07-18 19:14:54,147 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, UNASSIGN 2023-07-18 
19:14:54,148 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:54,148 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707694148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707694148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707694148"}]},"ts":"1689707694148"} 2023-07-18 19:14:54,149 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:54,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 19:14:54,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 793b564db9f6c3e94a3ee13d86ee23a5, disabling compactions & flushes 2023-07-18 19:14:54,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. after waiting 0 ms 2023-07-18 19:14:54,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 2023-07-18 19:14:54,307 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 19:14:54,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5. 
2023-07-18 19:14:54,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 793b564db9f6c3e94a3ee13d86ee23a5: 2023-07-18 19:14:54,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:54,310 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=793b564db9f6c3e94a3ee13d86ee23a5, regionState=CLOSED 2023-07-18 19:14:54,311 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689707694310"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707694310"}]},"ts":"1689707694310"} 2023-07-18 19:14:54,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 19:14:54,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 793b564db9f6c3e94a3ee13d86ee23a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 163 msec 2023-07-18 19:14:54,315 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-18 19:14:54,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=793b564db9f6c3e94a3ee13d86ee23a5, UNASSIGN in 168 msec 2023-07-18 19:14:54,316 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707694316"}]},"ts":"1689707694316"} 2023-07-18 19:14:54,318 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 19:14:54,320 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 19:14:54,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 182 msec 2023-07-18 19:14:54,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 19:14:54,445 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-18 19:14:54,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 19:14:54,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,450 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 19:14:54,451 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,456 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:54,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:54,458 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits] 2023-07-18 19:14:54,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-18 19:14:54,467 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/10.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5/recovered.edits/10.seqid 2023-07-18 19:14:54,468 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testFailRemoveGroup/793b564db9f6c3e94a3ee13d86ee23a5 2023-07-18 19:14:54,469 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 19:14:54,473 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,485 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 19:14:54,487 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 19:14:54,489 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,489 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-18 19:14:54,489 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707694489"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:54,492 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 19:14:54,492 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 793b564db9f6c3e94a3ee13d86ee23a5, NAME => 'Group_testFailRemoveGroup,,1689707691336.793b564db9f6c3e94a3ee13d86ee23a5.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 19:14:54,492 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-18 19:14:54,492 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707694492"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:54,495 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 19:14:54,498 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 19:14:54,499 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 52 msec 2023-07-18 19:14:54,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-18 19:14:54,564 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-18 19:14:54,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:54,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
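With the table back in the default group and rsgroup bar removed, the test drops the table: DisableTableProcedure pid=93 and DeleteTableProcedure pid=96 both finish above, the HFileArchiver archives the region directory, and the RSGroupAdminEndpoint hook strips the deleted table from rsgroup 'default'. On the client side this is just the Admin API; a minimal sketch (connection handling is illustrative; delete requires the table to be disabled first):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class DropTable {
      // disableTable blocks until the DisableTableProcedure completes; deleteTable
      // then archives the region directories and removes the table from hbase:meta.
      static void drop(Connection conn, TableName table) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);
          }
          admin.deleteTable(table);
        }
      }
    }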
2023-07-18 19:14:54,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:54,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:54,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:54,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:54,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:54,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:54,583 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:54,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:54,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:54,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:54,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:54,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:54,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708894607, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:54,607 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:54,609 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:54,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,611 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:54,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:54,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:54,632 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=531 (was 512) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:47252 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3-prefix:jenkins-hbase4.apache.org,44751,1689707683024.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:42356 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f700c8a-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:42228 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:60354 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:60400 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:60314 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:60366 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:47274 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_447768547_17 at /127.0.0.1:47264 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:47272 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:51190 [Receiving block BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-302341202-172.31.14.131-1689707673371:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=848 (was 829) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=379 (was 386), ProcessCount=173 (was 173), AvailableMemoryMB=3128 (was 3386) 2023-07-18 19:14:54,633 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=531 is superior to 500 2023-07-18 19:14:54,650 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=531, OpenFileDescriptor=848, MaxFileDescriptor=60000, SystemLoadAverage=379, ProcessCount=173, AvailableMemoryMB=3128 2023-07-18 19:14:54,650 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=531 is superior to 500 2023-07-18 19:14:54,651 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 19:14:54,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:54,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:14:54,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:54,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:54,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:54,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:54,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:54,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:54,668 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:54,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:54,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,671 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:54,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:54,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:54,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:54,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708894679, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:54,680 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:14:54,685 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:54,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,686 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:54,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:54,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:54,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:54,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:54,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:54,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:54,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,955 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387] to rsgroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:54,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:14:54,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286] are moved back to default 2023-07-18 19:14:54,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:54,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:54,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:54,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:54,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:54,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:54,977 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:54,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-18 19:14:54,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 19:14:54,979 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:54,980 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:54,980 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:54,981 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:54,987 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:54,989 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:54,990 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b empty. 2023-07-18 19:14:54,990 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:54,990 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 19:14:55,018 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:55,019 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1340b8c23748fdc97f9a4859bad5e67b, NAME => 'GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:55,031 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:55,032 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
1340b8c23748fdc97f9a4859bad5e67b, disabling compactions & flushes 2023-07-18 19:14:55,032 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,032 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,032 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. after waiting 0 ms 2023-07-18 19:14:55,032 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,032 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,032 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 1340b8c23748fdc97f9a4859bad5e67b: 2023-07-18 19:14:55,035 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:55,036 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707695036"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707695036"}]},"ts":"1689707695036"} 2023-07-18 19:14:55,038 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 19:14:55,039 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:55,039 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707695039"}]},"ts":"1689707695039"} 2023-07-18 19:14:55,040 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 19:14:55,048 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:55,048 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:55,048 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:55,048 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:55,048 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:55,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, ASSIGN}] 2023-07-18 19:14:55,050 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, ASSIGN 2023-07-18 19:14:55,051 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:55,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 19:14:55,197 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 19:14:55,202 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 19:14:55,204 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:55,204 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707695204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707695204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707695204"}]},"ts":"1689707695204"} 2023-07-18 19:14:55,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:55,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 19:14:55,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1340b8c23748fdc97f9a4859bad5e67b, NAME => 'GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:55,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:55,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,379 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,381 DEBUG [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/f 2023-07-18 19:14:55,381 DEBUG [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/f 2023-07-18 19:14:55,382 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1340b8c23748fdc97f9a4859bad5e67b columnFamilyName f 2023-07-18 19:14:55,383 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] regionserver.HStore(310): Store=1340b8c23748fdc97f9a4859bad5e67b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:55,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:55,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1340b8c23748fdc97f9a4859bad5e67b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9782922720, jitterRate=-0.088894322514534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:55,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1340b8c23748fdc97f9a4859bad5e67b: 2023-07-18 19:14:55,410 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b., pid=99, masterSystemTime=1689707695358 2023-07-18 19:14:55,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:55,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
2023-07-18 19:14:55,413 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:55,414 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707695413"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707695413"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707695413"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707695413"}]},"ts":"1689707695413"} 2023-07-18 19:14:55,418 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-18 19:14:55,418 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,39561,1689707679120 in 210 msec 2023-07-18 19:14:55,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-18 19:14:55,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, ASSIGN in 370 msec 2023-07-18 19:14:55,421 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:55,421 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707695421"}]},"ts":"1689707695421"} 2023-07-18 19:14:55,422 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 19:14:55,425 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:55,427 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 452 msec 2023-07-18 19:14:55,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 19:14:55,583 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-18 19:14:55,583 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 19:14:55,583 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:55,588 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
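The HMaster create entry above for 'GrouptestMultiTableMoveA' (a single column family 'f', VERSIONS => '1', every other attribute at its default) corresponds on the client side to an ordinary HBase 2.x Admin.createTable call. A minimal, self-contained sketch of that call follows; the class name and the connection/configuration setup are illustrative assumptions, not the test's actual code.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateMoveTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'f' kept at a single version, matching the descriptor logged above;
      // all other table/family attributes are left at their defaults.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
              .setColumnFamily(
                  ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                      .setMaxVersions(1)
                      .build())
              .build());
    }
  }
}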
2023-07-18 19:14:55,589 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:55,589 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 19:14:55,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:55,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:55,594 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:55,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-18 19:14:55,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 19:14:55,597 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:55,598 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:55,600 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:55,602 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:55,605 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:55,607 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:55,607 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d empty. 
2023-07-18 19:14:55,608 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:55,608 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 19:14:55,646 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:55,648 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6a8391d5192ad46d242a593ad2330e2d, NAME => 'GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 6a8391d5192ad46d242a593ad2330e2d, disabling compactions & flushes 2023-07-18 19:14:55,675 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. after waiting 0 ms 2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:55,675 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 
2023-07-18 19:14:55,675 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 6a8391d5192ad46d242a593ad2330e2d: 2023-07-18 19:14:55,678 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:55,680 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707695680"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707695680"}]},"ts":"1689707695680"} 2023-07-18 19:14:55,682 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:14:55,682 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:55,683 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707695683"}]},"ts":"1689707695683"} 2023-07-18 19:14:55,684 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 19:14:55,688 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:55,688 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:55,688 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:55,689 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:55,689 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:55,689 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, ASSIGN}] 2023-07-18 19:14:55,691 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, ASSIGN 2023-07-18 19:14:55,692 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39561,1689707679120; forceNewPlan=false, retain=false 2023-07-18 19:14:55,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 19:14:55,842 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 19:14:55,844 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:55,844 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707695844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707695844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707695844"}]},"ts":"1689707695844"} 2023-07-18 19:14:55,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:55,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 19:14:56,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a8391d5192ad46d242a593ad2330e2d, NAME => 'GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:56,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:56,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,003 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,005 DEBUG [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/f 2023-07-18 19:14:56,005 DEBUG [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/f 2023-07-18 19:14:56,005 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a8391d5192ad46d242a593ad2330e2d columnFamilyName f 2023-07-18 19:14:56,006 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] regionserver.HStore(310): Store=6a8391d5192ad46d242a593ad2330e2d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:56,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:56,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6a8391d5192ad46d242a593ad2330e2d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11567467680, jitterRate=0.07730437815189362}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:56,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6a8391d5192ad46d242a593ad2330e2d: 2023-07-18 19:14:56,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d., pid=102, masterSystemTime=1689707695998 2023-07-18 19:14:56,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 
2023-07-18 19:14:56,015 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:56,015 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696015"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707696015"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707696015"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707696015"}]},"ts":"1689707696015"} 2023-07-18 19:14:56,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-18 19:14:56,019 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,39561,1689707679120 in 171 msec 2023-07-18 19:14:56,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-18 19:14:56,020 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, ASSIGN in 330 msec 2023-07-18 19:14:56,021 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:56,021 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707696021"}]},"ts":"1689707696021"} 2023-07-18 19:14:56,022 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 19:14:56,025 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:56,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 434 msec 2023-07-18 19:14:56,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 19:14:56,202 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-18 19:14:56,202 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 19:14:56,202 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:56,207 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
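The GetRSGroupInfoOfTable requests in the entries that follow look up which rsgroup each of the two freshly created tables currently belongs to. From a client this is, roughly, the call sketched below, assuming the RSGroupAdminEndpoint coprocessor is installed as it is in this mini-cluster; the class name is illustrative only.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class GroupOfTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // A freshly created table is expected to report the "default" group
      // until it is explicitly moved to another rsgroup.
      RSGroupInfo info =
          rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
      System.out.println(info.getName());
    }
  }
}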
2023-07-18 19:14:56,207 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:56,207 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 19:14:56,208 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:56,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 19:14:56,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:56,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 19:14:56,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:56,223 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:56,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:56,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:56,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 6a8391d5192ad46d242a593ad2330e2d to RSGroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, REOPEN/MOVE 2023-07-18 19:14:56,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,235 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, REOPEN/MOVE 2023-07-18 19:14:56,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 1340b8c23748fdc97f9a4859bad5e67b to RSGroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:56,236 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:56,236 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707696236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707696236"}]},"ts":"1689707696236"} 2023-07-18 19:14:56,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, REOPEN/MOVE 2023-07-18 19:14:56,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1774423512, current retry=0 2023-07-18 19:14:56,240 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, REOPEN/MOVE 2023-07-18 19:14:56,241 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:56,241 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:14:56,241 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696241"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707696241"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707696241"}]},"ts":"1689707696241"} 2023-07-18 19:14:56,244 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,39561,1689707679120}] 2023-07-18 19:14:56,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6a8391d5192ad46d242a593ad2330e2d, disabling compactions & flushes 2023-07-18 19:14:56,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 
2023-07-18 19:14:56,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. after waiting 0 ms 2023-07-18 19:14:56,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:56,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6a8391d5192ad46d242a593ad2330e2d: 2023-07-18 19:14:56,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6a8391d5192ad46d242a593ad2330e2d move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:56,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1340b8c23748fdc97f9a4859bad5e67b, disabling compactions & flushes 2023-07-18 19:14:56,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:56,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:56,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. after waiting 0 ms 2023-07-18 19:14:56,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
2023-07-18 19:14:56,406 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=CLOSED 2023-07-18 19:14:56,406 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696406"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707696406"}]},"ts":"1689707696406"} 2023-07-18 19:14:56,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-18 19:14:56,412 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,39561,1689707679120 in 167 msec 2023-07-18 19:14:56,413 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:56,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:56,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
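The MoveServers request at the start of this section and the MoveTables request above (which spawned the REOPEN/MOVE procedures pid=103 and pid=104 and the CloseRegionProcedures just logged) are what a client issues through RSGroupAdminClient. A hedged sketch follows, with the group name and server address taken from the log; the surrounding setup is assumed, and the group itself was created earlier in the test, outside this excerpt.

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MultiTableMoveSketch {
  public static void main(String[] args) throws Exception {
    String group = "Group_testMultiTableMove_1774423512";
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move one region server into the group (host:port as logged for MoveServers).
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 36387)), group);
      // Move both tables; the master then closes their regions and reopens them
      // on servers belonging to the target group.
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
      tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
      rsGroupAdmin.moveTables(tables, group);
    }
  }
}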
2023-07-18 19:14:56,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1340b8c23748fdc97f9a4859bad5e67b: 2023-07-18 19:14:56,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1340b8c23748fdc97f9a4859bad5e67b move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:56,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,419 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=CLOSED 2023-07-18 19:14:56,419 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696419"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707696419"}]},"ts":"1689707696419"} 2023-07-18 19:14:56,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-18 19:14:56,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,39561,1689707679120 in 177 msec 2023-07-18 19:14:56,423 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; forceNewPlan=false, retain=false 2023-07-18 19:14:56,564 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:56,564 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:56,564 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696564"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707696564"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707696564"}]},"ts":"1689707696564"} 2023-07-18 19:14:56,564 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696564"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707696564"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707696564"}]},"ts":"1689707696564"} 2023-07-18 19:14:56,567 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=104, state=RUNNABLE; OpenRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:56,567 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=103, 
state=RUNNABLE; OpenRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:56,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6a8391d5192ad46d242a593ad2330e2d, NAME => 'GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:56,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:56,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,728 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,729 DEBUG [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/f 2023-07-18 19:14:56,729 DEBUG [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/f 2023-07-18 19:14:56,730 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6a8391d5192ad46d242a593ad2330e2d columnFamilyName f 2023-07-18 19:14:56,730 INFO [StoreOpener-6a8391d5192ad46d242a593ad2330e2d-1] regionserver.HStore(310): Store=6a8391d5192ad46d242a593ad2330e2d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:56,731 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:56,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6a8391d5192ad46d242a593ad2330e2d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9494530560, jitterRate=-0.1157529354095459}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:56,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6a8391d5192ad46d242a593ad2330e2d: 2023-07-18 19:14:56,737 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d., pid=108, masterSystemTime=1689707696722 2023-07-18 19:14:56,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:56,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
2023-07-18 19:14:56,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1340b8c23748fdc97f9a4859bad5e67b, NAME => 'GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:56,739 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:56,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:56,739 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696739"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707696739"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707696739"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707696739"}]},"ts":"1689707696739"} 2023-07-18 19:14:56,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,740 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,741 DEBUG [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/f 2023-07-18 19:14:56,741 DEBUG [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/f 2023-07-18 19:14:56,742 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1340b8c23748fdc97f9a4859bad5e67b columnFamilyName f 2023-07-18 19:14:56,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=103 2023-07-18 19:14:56,742 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=103, state=SUCCESS; OpenRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,36387,1689707679286 in 174 msec 2023-07-18 19:14:56,743 INFO [StoreOpener-1340b8c23748fdc97f9a4859bad5e67b-1] regionserver.HStore(310): Store=1340b8c23748fdc97f9a4859bad5e67b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:56,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:56,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1340b8c23748fdc97f9a4859bad5e67b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10875859360, jitterRate=0.012893334031105042}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:56,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1340b8c23748fdc97f9a4859bad5e67b: 2023-07-18 19:14:56,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b., pid=107, masterSystemTime=1689707696722 2023-07-18 19:14:56,751 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, REOPEN/MOVE in 510 msec 2023-07-18 19:14:56,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:56,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
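
The entries above record the data-plane side of the multi-table move: once GrouptestMultiTableMoveA and GrouptestMultiTableMoveB are assigned to Group_testMultiTableMove_1774423512, their regions are closed and reopened on jenkins-hbase4.apache.org,36387, a server in that group. Below is a minimal sketch of how a client drives such a move with the branch-2.4 rsgroup admin client; the class name MultiTableMoveSketch and the surrounding wiring are illustrative only (this is not the TestRSGroupsAdmin1 source) and assume the hbase-rsgroup client is on the classpath.

    import java.util.Arrays;
    import java.util.HashSet;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MultiTableMoveSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Move both tables into the target group in one call; the master then
          // reopens their regions on servers of that group (the REOPEN/MOVE
          // TransitRegionStateProcedure entries seen in the log).
          rsGroupAdmin.moveTables(
              new HashSet<>(Arrays.asList(
                  TableName.valueOf("GrouptestMultiTableMoveA"),
                  TableName.valueOf("GrouptestMultiTableMoveB"))),
              "Group_testMultiTableMove_1774423512");

          // Verify both tables now resolve to the target group.
          RSGroupInfo groupA = rsGroupAdmin.getRSGroupInfoOfTable(
              TableName.valueOf("GrouptestMultiTableMoveA"));
          RSGroupInfo groupB = rsGroupAdmin.getRSGroupInfoOfTable(
              TableName.valueOf("GrouptestMultiTableMoveB"));
          System.out.println(groupA.getName() + " / " + groupB.getName());
        }
      }
    }

Note that moveTables is synchronous from the caller's point of view: the master's RSGroupAdminServer waits on the region-move procedures (the "waitFor pid=103" entry) before logging "All regions from table(s) ... moved to target group" and returning the RPC.
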
2023-07-18 19:14:56,753 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:56,753 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707696753"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707696753"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707696753"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707696753"}]},"ts":"1689707696753"} 2023-07-18 19:14:56,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=104 2023-07-18 19:14:56,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=104, state=SUCCESS; OpenRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,36387,1689707679286 in 188 msec 2023-07-18 19:14:56,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, REOPEN/MOVE in 523 msec 2023-07-18 19:14:57,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-18 19:14:57,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1774423512. 2023-07-18 19:14:57,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:57,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:57,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:57,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:57,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 19:14:57,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:14:57,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:57,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:57,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1774423512 2023-07-18 19:14:57,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:57,254 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 19:14:57,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 19:14:57,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 19:14:57,258 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707697258"}]},"ts":"1689707697258"} 2023-07-18 19:14:57,260 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 19:14:57,263 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 19:14:57,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, UNASSIGN}] 2023-07-18 19:14:57,265 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, UNASSIGN 2023-07-18 19:14:57,266 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:57,266 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707697266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707697266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707697266"}]},"ts":"1689707697266"} 2023-07-18 19:14:57,267 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, 
server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:57,335 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 19:14:57,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 19:14:57,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:57,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1340b8c23748fdc97f9a4859bad5e67b, disabling compactions & flushes 2023-07-18 19:14:57,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:57,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:57,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. after waiting 0 ms 2023-07-18 19:14:57,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 2023-07-18 19:14:57,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:57,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b. 
2023-07-18 19:14:57,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1340b8c23748fdc97f9a4859bad5e67b: 2023-07-18 19:14:57,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:57,428 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=1340b8c23748fdc97f9a4859bad5e67b, regionState=CLOSED 2023-07-18 19:14:57,429 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707697428"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707697428"}]},"ts":"1689707697428"} 2023-07-18 19:14:57,432 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-18 19:14:57,433 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 1340b8c23748fdc97f9a4859bad5e67b, server=jenkins-hbase4.apache.org,36387,1689707679286 in 164 msec 2023-07-18 19:14:57,435 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-18 19:14:57,435 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1340b8c23748fdc97f9a4859bad5e67b, UNASSIGN in 170 msec 2023-07-18 19:14:57,436 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707697436"}]},"ts":"1689707697436"} 2023-07-18 19:14:57,439 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 19:14:57,440 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 19:14:57,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 188 msec 2023-07-18 19:14:57,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 19:14:57,561 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-18 19:14:57,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 19:14:57,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,564 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1774423512' 2023-07-18 19:14:57,565 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:57,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:57,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:57,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:57,569 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:57,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-18 19:14:57,571 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits] 2023-07-18 19:14:57,576 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b/recovered.edits/7.seqid 2023-07-18 19:14:57,577 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveA/1340b8c23748fdc97f9a4859bad5e67b 2023-07-18 19:14:57,577 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 19:14:57,579 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,582 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 19:14:57,583 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 19:14:57,584 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,584 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-18 19:14:57,584 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707697584"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:57,586 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 19:14:57,586 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1340b8c23748fdc97f9a4859bad5e67b, NAME => 'GrouptestMultiTableMoveA,,1689707694972.1340b8c23748fdc97f9a4859bad5e67b.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 19:14:57,586 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 19:14:57,586 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707697586"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:57,588 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 19:14:57,589 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 19:14:57,590 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 27 msec 2023-07-18 19:14:57,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-18 19:14:57,672 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-18 19:14:57,673 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 19:14:57,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 19:14:57,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:57,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 19:14:57,678 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707697677"}]},"ts":"1689707697677"} 2023-07-18 19:14:57,679 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 19:14:57,686 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 19:14:57,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, UNASSIGN}] 2023-07-18 19:14:57,688 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, UNASSIGN 2023-07-18 19:14:57,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:57,689 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707697689"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707697689"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707697689"}]},"ts":"1689707697689"} 2023-07-18 19:14:57,691 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:57,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 19:14:57,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:57,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6a8391d5192ad46d242a593ad2330e2d, disabling compactions & flushes 2023-07-18 19:14:57,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:57,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:57,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. after waiting 0 ms 2023-07-18 19:14:57,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 2023-07-18 19:14:57,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:14:57,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d. 
2023-07-18 19:14:57,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6a8391d5192ad46d242a593ad2330e2d: 2023-07-18 19:14:57,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:57,851 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=6a8391d5192ad46d242a593ad2330e2d, regionState=CLOSED 2023-07-18 19:14:57,851 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689707697851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707697851"}]},"ts":"1689707697851"} 2023-07-18 19:14:57,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 19:14:57,854 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 6a8391d5192ad46d242a593ad2330e2d, server=jenkins-hbase4.apache.org,36387,1689707679286 in 161 msec 2023-07-18 19:14:57,856 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-18 19:14:57,856 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6a8391d5192ad46d242a593ad2330e2d, UNASSIGN in 168 msec 2023-07-18 19:14:57,856 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707697856"}]},"ts":"1689707697856"} 2023-07-18 19:14:57,858 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 19:14:57,860 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 19:14:57,862 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 188 msec 2023-07-18 19:14:57,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 19:14:57,979 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-18 19:14:57,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 19:14:57,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:57,985 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:57,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1774423512' 2023-07-18 19:14:57,986 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:57,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:57,992 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:57,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:57,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:57,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:57,994 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits] 2023-07-18 19:14:57,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-18 19:14:58,002 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits/7.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d/recovered.edits/7.seqid 2023-07-18 19:14:58,003 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/GrouptestMultiTableMoveB/6a8391d5192ad46d242a593ad2330e2d 2023-07-18 19:14:58,003 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 19:14:58,007 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:58,011 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 19:14:58,012 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 19:14:58,013 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:58,013 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-18 19:14:58,014 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707698013"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:58,015 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 19:14:58,015 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6a8391d5192ad46d242a593ad2330e2d, NAME => 'GrouptestMultiTableMoveB,,1689707695590.6a8391d5192ad46d242a593ad2330e2d.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 19:14:58,015 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 19:14:58,015 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707698015"}]},"ts":"9223372036854775807"} 2023-07-18 19:14:58,017 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 19:14:58,019 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 19:14:58,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 38 msec 2023-07-18 19:14:58,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-18 19:14:58,099 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-18 19:14:58,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:58,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387] to rsgroup default 2023-07-18 19:14:58,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1774423512 2023-07-18 19:14:58,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1774423512, current retry=0 2023-07-18 19:14:58,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286] are moved back to Group_testMultiTableMove_1774423512 2023-07-18 19:14:58,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1774423512 => default 2023-07-18 19:14:58,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1774423512 2023-07-18 19:14:58,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:14:58,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
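
The teardown next tries to move the active master's address (jenkins-hbase4.apache.org:43617) into the "master" rsgroup. Because that address is not a live region server from the rsgroup manager's point of view, RSGroupAdminServer.moveServers rejects it with the ConstraintException shown in the following entries; the test merely logs it as a WARN ("Got this on setup, FYI") and continues. A hedged sketch of the call and of how the rejection surfaces on the client side follows; the class and method names are illustrative, not part of the test.

    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      // Attempt to move an address into the "master" rsgroup. RSGroupAdminServer
      // rejects addresses that are not known online region servers, which is the
      // ConstraintException recorded in the log below.
      static void moveMasterAddress(Connection conn, String host, int port) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts(host, port)), "master");
        } catch (ConstraintException e) {
          // e.g. "Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist."
          System.out.println("moveServers rejected: " + e.getMessage());
        }
      }
    }

ConstraintException is a DoNotRetryIOException, so the remote exception is unwrapped and rethrown to the caller as-is rather than being retried.
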
2023-07-18 19:14:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:58,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:58,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:58,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,128 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:58,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:58,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:58,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:58,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708898143, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:58,144 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:58,146 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:58,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,148 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:58,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,173 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=525 (was 531), OpenFileDescriptor=822 (was 848), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=365 (was 379), ProcessCount=173 (was 173), AvailableMemoryMB=3026 (was 3128) 2023-07-18 19:14:58,174 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-18 19:14:58,192 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=525, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=365, ProcessCount=173, AvailableMemoryMB=3026 2023-07-18 19:14:58,192 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-18 19:14:58,192 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 19:14:58,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:14:58,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:58,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:58,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:58,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,209 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:58,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:58,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:58,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:58,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708898231, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:58,232 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:14:58,242 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:58,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,244 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:58,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 19:14:58,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup oldGroup 2023-07-18 19:14:58,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:14:58,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to default 2023-07-18 19:14:58,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 19:14:58,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 19:14:58,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 19:14:58,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,295 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 19:14:58,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 19:14:58,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:14:58,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41417] to rsgroup anotherRSGroup 2023-07-18 19:14:58,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 19:14:58,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:14:58,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:14:58,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41417,1689707679207] are moved back to default 2023-07-18 19:14:58,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 19:14:58,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,339 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 19:14:58,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,344 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 19:14:58,344 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,351 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 19:14:58,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:36768 deadline: 1689708898350, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 19:14:58,353 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 19:14:58,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:36768 deadline: 1689708898353, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 19:14:58,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 19:14:58,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:36768 deadline: 1689708898355, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 19:14:58,360 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 19:14:58,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:36768 deadline: 1689708898360, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 19:14:58,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,370 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:58,370 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,371 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41417] to rsgroup default 2023-07-18 19:14:58,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 19:14:58,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:14:58,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 19:14:58,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41417,1689707679207] are moved back to anotherRSGroup 2023-07-18 19:14:58,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 19:14:58,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,389 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 19:14:58,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 19:14:58,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 19:14:58,398 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:14:58,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 19:14:58,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 19:14:58,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to oldGroup 2023-07-18 19:14:58,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 19:14:58,404 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,405 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 19:14:58,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:14:58,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:58,416 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:58,417 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,421 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:58,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:58,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,438 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:58,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:58,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:58,446 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,449 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,450 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,453 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:58,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708898453, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:58,454 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:58,456 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:58,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,459 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,459 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:58,460 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,460 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,488 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=529 (was 525) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=365 (was 365), ProcessCount=173 (was 173), AvailableMemoryMB=3013 (was 3026) 2023-07-18 19:14:58,488 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=529 is superior to 500 2023-07-18 19:14:58,512 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=529, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=365, ProcessCount=173, AvailableMemoryMB=3013 2023-07-18 19:14:58,512 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=529 is superior to 500 2023-07-18 19:14:58,512 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 19:14:58,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:14:58,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:14:58,518 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:14:58,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:14:58,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,520 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:14:58,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:14:58,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:14:58,529 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:14:58,530 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:14:58,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:14:58,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,544 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:14:58,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
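The ConstraintException recorded just above (its stack trace follows in the log) comes from the per-method cleanup in TestRSGroupsBase: the harness re-adds a "master" rsgroup and then tries to move the address jenkins-hbase4.apache.org:43617 -- the active master's RPC port, not a region server -- into it, and RSGroupAdminServer.moveServers rejects addresses that are not live region servers. Below is a minimal illustrative sketch of that client call, assuming an already-open Connection named conn; the class and method names match the frames in the trace, but the surrounding code is not the test's exact source.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterAddressSketch {
  // Roughly what the harness does around each test method: ensure a "master"
  // group exists, then try to move the master's address into it.
  static void moveMasterAddress(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("master");
    try {
      // 43617 is the master's RPC port in this run; it is not a registered
      // region server, so the server side throws ConstraintException.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43617)),
          "master");
    } catch (ConstraintException expected) {
      // The test logs this as "Got this on setup, FYI" and continues.
    }
  }
}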
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:14:58,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708898544, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:14:58,545 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:14:58,547 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:58,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,548 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:14:58,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:14:58,550 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 19:14:58,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:14:58,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:14:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,564 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup oldgroup 2023-07-18 19:14:58,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:14:58,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:14:58,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to default 2023-07-18 19:14:58,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 19:14:58,569 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:14:58,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:14:58,573 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:14:58,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 19:14:58,576 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:14:58,578 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:14:58,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 19:14:58,582 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:14:58,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-18 19:14:58,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 19:14:58,584 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:14:58,584 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:58,585 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:58,586 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:58,588 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:14:58,589 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,590 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 empty. 
2023-07-18 19:14:58,591 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,591 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 19:14:58,608 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 19:14:58,609 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => c29891d66bd2ca5aa0c94f69449e63a5, NAME => 'testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:14:58,622 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:58,623 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing c29891d66bd2ca5aa0c94f69449e63a5, disabling compactions & flushes 2023-07-18 19:14:58,623 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,623 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,623 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. after waiting 0 ms 2023-07-18 19:14:58,623 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,623 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,623 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:14:58,625 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:14:58,626 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707698626"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707698626"}]},"ts":"1689707698626"} 2023-07-18 19:14:58,627 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 19:14:58,628 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:14:58,628 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707698628"}]},"ts":"1689707698628"} 2023-07-18 19:14:58,629 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 19:14:58,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:58,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:58,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:58,633 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:58,633 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, ASSIGN}] 2023-07-18 19:14:58,635 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, ASSIGN 2023-07-18 19:14:58,636 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:14:58,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 19:14:58,786 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 19:14:58,788 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:58,788 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707698788"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707698788"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707698788"}]},"ts":"1689707698788"} 2023-07-18 19:14:58,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:58,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 19:14:58,944 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29891d66bd2ca5aa0c94f69449e63a5, NAME => 'testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:58,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:58,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,946 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,948 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:14:58,948 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:14:58,948 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c29891d66bd2ca5aa0c94f69449e63a5 columnFamilyName tr 2023-07-18 19:14:58,948 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(310): Store=c29891d66bd2ca5aa0c94f69449e63a5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:58,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:58,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:14:58,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29891d66bd2ca5aa0c94f69449e63a5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9604674080, jitterRate=-0.10549502074718475}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:58,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:14:58,956 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5., pid=119, masterSystemTime=1689707698941 2023-07-18 19:14:58,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:58,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
2023-07-18 19:14:58,959 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:58,959 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707698959"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707698959"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707698959"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707698959"}]},"ts":"1689707698959"} 2023-07-18 19:14:58,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-18 19:14:58,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 171 msec 2023-07-18 19:14:58,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 19:14:58,963 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, ASSIGN in 329 msec 2023-07-18 19:14:58,964 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:14:58,964 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707698964"}]},"ts":"1689707698964"} 2023-07-18 19:14:58,965 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 19:14:58,967 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:14:58,968 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 388 msec 2023-07-18 19:14:59,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 19:14:59,187 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-18 19:14:59,187 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 19:14:59,187 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:59,190 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 19:14:59,190 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:14:59,191 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
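At this point the table testRename (single column family 'tr') has been created and its only region, c29891d66bd2ca5aa0c94f69449e63a5, is assigned; the entries that follow move the table into the oldgroup rsgroup, which the master carries out as a REOPEN/MOVE of that region onto a server belonging to the group. A rough client-side equivalent of this create-then-move sequence is sketched below, again assuming an open Connection named conn and the same private RSGroupAdminClient seen in the traces earlier; it is illustrative only, not the test's exact code.

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  static void createAndMove(Connection conn) throws Exception {
    TableName table = TableName.valueOf("testRename");
    try (Admin admin = conn.getAdmin()) {
      // Matches the descriptor logged by HMaster: one family 'tr', defaults otherwise.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build());
    }
    // Moving the table makes the master reopen its region(s) on servers in the
    // target group -- the CloseRegionProcedure/OpenRegionProcedure pair below.
    new RSGroupAdminClient(conn).moveTables(Collections.singleton(table), "oldgroup");
  }
}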
2023-07-18 19:14:59,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 19:14:59,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:14:59,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:14:59,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:14:59,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:14:59,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 19:14:59,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region c29891d66bd2ca5aa0c94f69449e63a5 to RSGroup oldgroup 2023-07-18 19:14:59,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:14:59,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:14:59,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:14:59,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:14:59,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:14:59,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE 2023-07-18 19:14:59,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 19:14:59,198 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE 2023-07-18 19:14:59,199 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:14:59,199 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707699198"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707699198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707699198"}]},"ts":"1689707699198"} 2023-07-18 19:14:59,200 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:14:59,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29891d66bd2ca5aa0c94f69449e63a5, disabling compactions & flushes 2023-07-18 19:14:59,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. after waiting 0 ms 2023-07-18 19:14:59,354 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,358 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:14:59,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:14:59,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c29891d66bd2ca5aa0c94f69449e63a5 move to jenkins-hbase4.apache.org,36387,1689707679286 record at close sequenceid=2 2023-07-18 19:14:59,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,361 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=CLOSED 2023-07-18 19:14:59,361 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707699361"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707699361"}]},"ts":"1689707699361"} 2023-07-18 19:14:59,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 19:14:59,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 162 msec 2023-07-18 19:14:59,364 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36387,1689707679286; 
forceNewPlan=false, retain=false 2023-07-18 19:14:59,515 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 19:14:59,515 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:59,515 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707699515"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707699515"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707699515"}]},"ts":"1689707699515"} 2023-07-18 19:14:59,517 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:14:59,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29891d66bd2ca5aa0c94f69449e63a5, NAME => 'testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:14:59,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:14:59,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,682 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,683 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:14:59,683 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:14:59,684 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c29891d66bd2ca5aa0c94f69449e63a5 columnFamilyName tr 2023-07-18 19:14:59,686 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(310): Store=c29891d66bd2ca5aa0c94f69449e63a5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:14:59,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:14:59,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29891d66bd2ca5aa0c94f69449e63a5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10898278080, jitterRate=0.014981240034103394}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:14:59,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:14:59,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5., pid=122, masterSystemTime=1689707699668 2023-07-18 19:14:59,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:14:59,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
2023-07-18 19:14:59,700 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:14:59,701 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707699700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707699700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707699700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707699700"}]},"ts":"1689707699700"} 2023-07-18 19:14:59,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-18 19:14:59,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,36387,1689707679286 in 185 msec 2023-07-18 19:14:59,707 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE in 508 msec 2023-07-18 19:15:00,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-18 19:15:00,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-18 19:15:00,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:00,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:00,202 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:00,205 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:00,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 19:15:00,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:00,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 19:15:00,207 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:00,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 19:15:00,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:00,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:00,209 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:00,210 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 19:15:00,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:15:00,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:00,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:00,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:00,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:00,223 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:00,227 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:00,227 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:00,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41417] to rsgroup normal 2023-07-18 19:15:00,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:15:00,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:00,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:00,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:00,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:00,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:15:00,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41417,1689707679207] are moved back to default 2023-07-18 19:15:00,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 19:15:00,236 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:00,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:00,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:00,244 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 19:15:00,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:00,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:00,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 19:15:00,250 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:00,250 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-18 19:15:00,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 19:15:00,252 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:15:00,253 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:00,253 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:00,254 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 19:15:00,254 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:00,256 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:00,258 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,259 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 empty. 2023-07-18 19:15:00,259 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,259 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 19:15:00,282 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:00,283 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9d7c43afbc30b4fb381514d1ccc4d668, NAME => 'unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 9d7c43afbc30b4fb381514d1ccc4d668, disabling compactions & flushes 2023-07-18 19:15:00,300 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. after waiting 0 ms 2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:00,300 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 
2023-07-18 19:15:00,300 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:00,303 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:00,304 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707700303"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707700303"}]},"ts":"1689707700303"} 2023-07-18 19:15:00,305 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:00,305 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:00,305 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707700305"}]},"ts":"1689707700305"} 2023-07-18 19:15:00,306 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 19:15:00,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, ASSIGN}] 2023-07-18 19:15:00,312 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, ASSIGN 2023-07-18 19:15:00,313 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:15:00,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 19:15:00,465 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:00,465 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707700465"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707700465"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707700465"}]},"ts":"1689707700465"} 2023-07-18 19:15:00,467 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:00,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-18 19:15:00,622 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:00,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d7c43afbc30b4fb381514d1ccc4d668, NAME => 'unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:00,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:00,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,624 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,626 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:00,626 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:00,627 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d7c43afbc30b4fb381514d1ccc4d668 columnFamilyName ut 2023-07-18 19:15:00,627 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(310): Store=9d7c43afbc30b4fb381514d1ccc4d668/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:00,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:00,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:00,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d7c43afbc30b4fb381514d1ccc4d668; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11491503840, jitterRate=0.07022969424724579}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:00,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:00,634 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668., pid=125, masterSystemTime=1689707700618 2023-07-18 19:15:00,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:00,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 
2023-07-18 19:15:00,636 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:00,636 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707700636"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707700636"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707700636"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707700636"}]},"ts":"1689707700636"} 2023-07-18 19:15:00,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-18 19:15:00,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024 in 170 msec 2023-07-18 19:15:00,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 19:15:00,641 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, ASSIGN in 328 msec 2023-07-18 19:15:00,641 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:00,641 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707700641"}]},"ts":"1689707700641"} 2023-07-18 19:15:00,642 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 19:15:00,645 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:00,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 398 msec 2023-07-18 19:15:00,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 19:15:00,855 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-18 19:15:00,855 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 19:15:00,855 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:00,858 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-18 19:15:00,858 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:00,858 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-18 19:15:00,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 19:15:00,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 19:15:00,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:00,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:00,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:00,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:00,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 19:15:00,865 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 9d7c43afbc30b4fb381514d1ccc4d668 to RSGroup normal 2023-07-18 19:15:00,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE 2023-07-18 19:15:00,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 19:15:00,866 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE 2023-07-18 19:15:00,866 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:00,866 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707700866"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707700866"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707700866"}]},"ts":"1689707700866"} 2023-07-18 19:15:00,868 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:01,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d7c43afbc30b4fb381514d1ccc4d668, disabling compactions & flushes 2023-07-18 19:15:01,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 
2023-07-18 19:15:01,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:01,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. after waiting 0 ms 2023-07-18 19:15:01,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:01,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:01,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:01,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:01,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d7c43afbc30b4fb381514d1ccc4d668 move to jenkins-hbase4.apache.org,41417,1689707679207 record at close sequenceid=2 2023-07-18 19:15:01,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,028 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=CLOSED 2023-07-18 19:15:01,028 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707701028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707701028"}]},"ts":"1689707701028"} 2023-07-18 19:15:01,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 19:15:01,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024 in 162 msec 2023-07-18 19:15:01,031 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:15:01,181 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:01,182 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707701181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707701181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707701181"}]},"ts":"1689707701181"} 2023-07-18 19:15:01,183 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:01,344 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:01,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d7c43afbc30b4fb381514d1ccc4d668, NAME => 'unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:01,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:01,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,346 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,347 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:01,347 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:01,348 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
9d7c43afbc30b4fb381514d1ccc4d668 columnFamilyName ut 2023-07-18 19:15:01,348 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(310): Store=9d7c43afbc30b4fb381514d1ccc4d668/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:01,349 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:01,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d7c43afbc30b4fb381514d1ccc4d668; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9435582080, jitterRate=-0.1212429404258728}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:01,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:01,356 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668., pid=128, masterSystemTime=1689707701335 2023-07-18 19:15:01,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:01,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 
2023-07-18 19:15:01,358 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:01,358 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707701358"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707701358"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707701358"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707701358"}]},"ts":"1689707701358"} 2023-07-18 19:15:01,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 19:15:01,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,41417,1689707679207 in 177 msec 2023-07-18 19:15:01,362 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE in 496 msec 2023-07-18 19:15:01,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 19:15:01,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-18 19:15:01,866 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:01,869 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:01,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:01,872 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:01,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 19:15:01,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:01,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 19:15:01,874 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:01,874 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 19:15:01,874 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:01,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 19:15:01,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:01,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:01,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:01,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:01,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 19:15:01,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 19:15:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 19:15:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:01,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 19:15:01,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:01,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 19:15:01,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:01,893 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:01,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:01,895 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 19:15:01,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:01,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:01,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:01,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:01,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:01,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 19:15:01,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region 9d7c43afbc30b4fb381514d1ccc4d668 to RSGroup default 2023-07-18 19:15:01,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE 2023-07-18 19:15:01,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 19:15:01,906 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE 2023-07-18 19:15:01,907 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:01,907 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707701907"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707701907"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707701907"}]},"ts":"1689707701907"} 2023-07-18 19:15:01,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:02,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d7c43afbc30b4fb381514d1ccc4d668, disabling compactions & flushes 2023-07-18 19:15:02,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. after waiting 0 ms 2023-07-18 19:15:02,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:15:02,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:02,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9d7c43afbc30b4fb381514d1ccc4d668 move to jenkins-hbase4.apache.org,44751,1689707683024 record at close sequenceid=5 2023-07-18 19:15:02,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,072 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=CLOSED 2023-07-18 19:15:02,072 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707702072"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707702072"}]},"ts":"1689707702072"} 2023-07-18 19:15:02,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-18 19:15:02,075 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,41417,1689707679207 in 164 msec 2023-07-18 19:15:02,076 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:15:02,226 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:02,227 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707702226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707702226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707702226"}]},"ts":"1689707702226"} 2023-07-18 19:15:02,228 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:02,241 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 19:15:02,384 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9d7c43afbc30b4fb381514d1ccc4d668, NAME => 'unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,387 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,388 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:02,388 DEBUG [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/ut 2023-07-18 19:15:02,388 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9d7c43afbc30b4fb381514d1ccc4d668 columnFamilyName ut 2023-07-18 19:15:02,389 INFO [StoreOpener-9d7c43afbc30b4fb381514d1ccc4d668-1] regionserver.HStore(310): Store=9d7c43afbc30b4fb381514d1ccc4d668/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:02,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,391 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:02,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9d7c43afbc30b4fb381514d1ccc4d668; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10618815840, jitterRate=-0.011045709252357483}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:02,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:02,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668., pid=131, masterSystemTime=1689707702380 2023-07-18 19:15:02,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:02,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 
2023-07-18 19:15:02,398 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=9d7c43afbc30b4fb381514d1ccc4d668, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:02,398 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689707702398"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707702398"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707702398"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707702398"}]},"ts":"1689707702398"} 2023-07-18 19:15:02,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-18 19:15:02,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 9d7c43afbc30b4fb381514d1ccc4d668, server=jenkins-hbase4.apache.org,44751,1689707683024 in 172 msec 2023-07-18 19:15:02,402 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=9d7c43afbc30b4fb381514d1ccc4d668, REOPEN/MOVE in 496 msec 2023-07-18 19:15:02,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-18 19:15:02,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-18 19:15:02,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:02,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41417] to rsgroup default 2023-07-18 19:15:02,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 19:15:02,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:02,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:02,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:02,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:02,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 19:15:02,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41417,1689707679207] are moved back to normal 2023-07-18 19:15:02,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 19:15:02,913 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:02,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 19:15:02,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:02,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:02,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:02,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 19:15:02,920 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:02,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:02,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:02,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:02,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:02,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:02,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:02,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:02,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:02,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:15:02,929 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:02,931 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 19:15:02,932 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:02,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:02,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:02,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 19:15:02,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(345): Moving region c29891d66bd2ca5aa0c94f69449e63a5 to RSGroup default 2023-07-18 19:15:02,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE 2023-07-18 19:15:02,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 19:15:02,935 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE 2023-07-18 19:15:02,935 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:15:02,936 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707702935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707702935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707702935"}]},"ts":"1689707702935"} 2023-07-18 19:15:02,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,36387,1689707679286}] 2023-07-18 19:15:03,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29891d66bd2ca5aa0c94f69449e63a5, disabling compactions & flushes 2023-07-18 19:15:03,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
after waiting 0 ms 2023-07-18 19:15:03,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 19:15:03,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:15:03,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c29891d66bd2ca5aa0c94f69449e63a5 move to jenkins-hbase4.apache.org,41417,1689707679207 record at close sequenceid=5 2023-07-18 19:15:03,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,099 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=CLOSED 2023-07-18 19:15:03,099 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707703099"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707703099"}]},"ts":"1689707703099"} 2023-07-18 19:15:03,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-18 19:15:03,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,36387,1689707679286 in 163 msec 2023-07-18 19:15:03,102 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:15:03,253 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 19:15:03,253 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:03,253 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707703253"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707703253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707703253"}]},"ts":"1689707703253"} 2023-07-18 19:15:03,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:03,409 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c29891d66bd2ca5aa0c94f69449e63a5, NAME => 'testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:03,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:03,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,411 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,412 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:15:03,412 DEBUG [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/tr 2023-07-18 19:15:03,413 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c29891d66bd2ca5aa0c94f69449e63a5 columnFamilyName tr 2023-07-18 19:15:03,413 INFO [StoreOpener-c29891d66bd2ca5aa0c94f69449e63a5-1] regionserver.HStore(310): Store=c29891d66bd2ca5aa0c94f69449e63a5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:03,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:03,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c29891d66bd2ca5aa0c94f69449e63a5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9443915680, jitterRate=-0.12046681344509125}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:03,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:15:03,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5., pid=134, masterSystemTime=1689707703406 2023-07-18 19:15:03,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:03,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
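Once the testRename region above is reopened on jenkins-hbase4.apache.org,41417, the entries that follow move the remaining servers out of newgroup and then drop the group. A minimal sketch of that move-servers-then-remove-group sequence, again assuming the RSGroupAdminClient API visible in the stack traces (the host:port values are copied from the log; everything else is illustrative):

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Servers have to be back in the default group before their old group can be removed.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromString("jenkins-hbase4.apache.org:36387"));
      servers.add(Address.fromString("jenkins-hbase4.apache.org:39561"));
      rsGroupAdmin.moveServers(servers, "default");
      // Removing a group only succeeds once it no longer owns any servers or tables.
      rsGroupAdmin.removeRSGroup("newgroup");
    }
  }
}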
2023-07-18 19:15:03,421 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=c29891d66bd2ca5aa0c94f69449e63a5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:03,421 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689707703421"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707703421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707703421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707703421"}]},"ts":"1689707703421"} 2023-07-18 19:15:03,424 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-18 19:15:03,424 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure c29891d66bd2ca5aa0c94f69449e63a5, server=jenkins-hbase4.apache.org,41417,1689707679207 in 167 msec 2023-07-18 19:15:03,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=c29891d66bd2ca5aa0c94f69449e63a5, REOPEN/MOVE in 490 msec 2023-07-18 19:15:03,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-18 19:15:03,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-18 19:15:03,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:03,937 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:15:03,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:03,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 19:15:03,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 19:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to newgroup 2023-07-18 19:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 19:15:03,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:03,942 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 19:15:03,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:03,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:03,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:03,955 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:03,956 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:03,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:03,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:03,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:03,963 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:03,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:03,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:03,969 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:03,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:03,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 761 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708903969, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:03,970 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:03,972 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:03,973 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:03,973 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:03,973 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:03,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:03,974 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:03,997 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=520 (was 529), OpenFileDescriptor=803 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=336 (was 365), ProcessCount=173 (was 173), AvailableMemoryMB=2880 (was 3013) 2023-07-18 19:15:03,997 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-18 19:15:04,024 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=520, OpenFileDescriptor=803, MaxFileDescriptor=60000, SystemLoadAverage=336, ProcessCount=173, AvailableMemoryMB=2879 2023-07-18 19:15:04,024 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-18 19:15:04,025 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 19:15:04,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,032 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:04,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:04,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:04,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:04,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:04,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:04,041 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:04,047 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:04,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:04,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:04,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,061 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:04,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 789 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708904061, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:04,062 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:04,063 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,065 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:04,066 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:04,066 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,066 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 19:15:04,067 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:04,073 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 19:15:04,074 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 19:15:04,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 19:15:04,075 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,076 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 19:15:04,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 801 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:36768 deadline: 1689708904075, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 19:15:04,078 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 19:15:04,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 804 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:36768 deadline: 1689708904078, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 19:15:04,081 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 19:15:04,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 19:15:04,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 19:15:04,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:36768 deadline: 1689708904086, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 19:15:04,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:04,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:04,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:04,094 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:04,094 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:04,095 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:04,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:04,100 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:04,102 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:04,103 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:04,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:04,108 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:04,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,114 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:04,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 832 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708904114, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:04,117 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:04,119 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,120 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:04,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:04,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,139 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=524 (was 520) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd8a14a-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=803 (was 803), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=336 (was 336), ProcessCount=173 (was 173), AvailableMemoryMB=2876 (was 2879) 2023-07-18 19:15:04,139 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-18 19:15:04,158 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=524, OpenFileDescriptor=803, MaxFileDescriptor=60000, SystemLoadAverage=336, ProcessCount=173, AvailableMemoryMB=2875 2023-07-18 19:15:04,158 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-18 19:15:04,158 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 19:15:04,164 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,164 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,165 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:04,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:04,165 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:04,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:04,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:04,168 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:04,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:04,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:04,177 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:04,177 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:04,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:04,186 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:04,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,191 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:04,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:04,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 860 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708904191, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:04,192 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:04,194 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,195 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:04,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:04,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:04,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 
19:15:04,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:04,203 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:04,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:04,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 19:15:04,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to default 2023-07-18 19:15:04,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,215 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:04,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:04,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:04,220 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,220 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:04,222 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:04,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:04,225 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:04,225 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-18 19:15:04,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 19:15:04,227 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:04,227 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 19:15:04,227 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:04,228 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:04,230 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:04,235 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,236 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,236 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,235 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,236 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,236 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 empty. 2023-07-18 19:15:04,237 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d empty. 2023-07-18 19:15:04,237 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 empty. 2023-07-18 19:15:04,237 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 empty. 2023-07-18 19:15:04,237 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 empty. 2023-07-18 19:15:04,237 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,238 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,238 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,238 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,238 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,238 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 19:15:04,250 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:04,252 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => eae77a063b55ab99c2cc64b694025001, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:04,252 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 792e287eae4d3ed13fc19cf3418fab18, NAME => 'Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:04,252 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8070ea769d786df5f3f108b3f07e479d, NAME => 'Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:04,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,267 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 8070ea769d786df5f3f108b3f07e479d, disabling compactions & flushes 2023-07-18 19:15:04,268 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:04,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:04,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. after waiting 0 ms 2023-07-18 19:15:04,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:04,268 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 
2023-07-18 19:15:04,268 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 8070ea769d786df5f3f108b3f07e479d: 2023-07-18 19:15:04,268 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5d0f2f4733f3e79e1cd236e7ae156ff5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing eae77a063b55ab99c2cc64b694025001, disabling compactions & flushes 2023-07-18 19:15:04,270 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. after waiting 0 ms 2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:04,270 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 
2023-07-18 19:15:04,270 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for eae77a063b55ab99c2cc64b694025001: 2023-07-18 19:15:04,271 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => f7ba8ee9a68ff4c641d3a1737e0f33b1, NAME => 'Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp 2023-07-18 19:15:04,279 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,280 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 792e287eae4d3ed13fc19cf3418fab18, disabling compactions & flushes 2023-07-18 19:15:04,280 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,280 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,280 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. after waiting 0 ms 2023-07-18 19:15:04,280 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,280 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,280 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 792e287eae4d3ed13fc19cf3418fab18: 2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing f7ba8ee9a68ff4c641d3a1737e0f33b1, disabling compactions & flushes 2023-07-18 19:15:04,303 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 
2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. after waiting 0 ms 2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,303 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,303 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for f7ba8ee9a68ff4c641d3a1737e0f33b1: 2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 5d0f2f4733f3e79e1cd236e7ae156ff5, disabling compactions & flushes 2023-07-18 19:15:04,307 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. after waiting 0 ms 2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:04,307 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 
2023-07-18 19:15:04,307 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 5d0f2f4733f3e79e1cd236e7ae156ff5: 2023-07-18 19:15:04,310 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:04,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707704311"}]},"ts":"1689707704311"} 2023-07-18 19:15:04,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707704311"}]},"ts":"1689707704311"} 2023-07-18 19:15:04,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707704311"}]},"ts":"1689707704311"} 2023-07-18 19:15:04,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707704311"}]},"ts":"1689707704311"} 2023-07-18 19:15:04,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707704311"}]},"ts":"1689707704311"} 2023-07-18 19:15:04,314 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
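MetaTableAccessor reports "Added 5 regions to meta", one info:regioninfo/info:state row per region. The same region list can be read back through the Admin API; a small sketch follows, with connection setup again illustrative rather than copied from the test.

import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ListTableRegionsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      List<RegionInfo> regions =
          admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
      // Expect 5 entries, matching the rows written by MetaTableAccessor above.
      for (RegionInfo ri : regions) {
        System.out.println(ri.getEncodedName() + " ["
            + Bytes.toStringBinary(ri.getStartKey()) + ", "
            + Bytes.toStringBinary(ri.getEndKey()) + ")");
      }
    }
  }
}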
2023-07-18 19:15:04,314 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:04,315 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707704315"}]},"ts":"1689707704315"} 2023-07-18 19:15:04,316 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 19:15:04,319 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:04,320 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:04,320 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:04,320 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:04,320 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, ASSIGN}] 2023-07-18 19:15:04,323 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, ASSIGN 2023-07-18 19:15:04,323 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, ASSIGN 2023-07-18 19:15:04,323 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, ASSIGN 2023-07-18 19:15:04,323 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, ASSIGN 2023-07-18 19:15:04,324 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, ASSIGN 2023-07-18 19:15:04,324 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:15:04,324 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:15:04,324 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:15:04,324 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41417,1689707679207; forceNewPlan=false, retain=false 2023-07-18 19:15:04,325 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44751,1689707683024; forceNewPlan=false, retain=false 2023-07-18 19:15:04,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 19:15:04,474 INFO [jenkins-hbase4:43617] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 19:15:04,478 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=f7ba8ee9a68ff4c641d3a1737e0f33b1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,478 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=792e287eae4d3ed13fc19cf3418fab18, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,478 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704478"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704478"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704478"}]},"ts":"1689707704478"} 2023-07-18 19:15:04,478 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=eae77a063b55ab99c2cc64b694025001, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,478 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=5d0f2f4733f3e79e1cd236e7ae156ff5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,478 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704478"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704478"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704478"}]},"ts":"1689707704478"} 2023-07-18 19:15:04,478 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704478"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704478"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704478"}]},"ts":"1689707704478"} 2023-07-18 19:15:04,478 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8070ea769d786df5f3f108b3f07e479d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,478 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704478"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704478"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704478"}]},"ts":"1689707704478"} 2023-07-18 19:15:04,478 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704478"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704478"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704478"}]},"ts":"1689707704478"} 2023-07-18 19:15:04,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE; OpenRegionProcedure f7ba8ee9a68ff4c641d3a1737e0f33b1, 
server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:04,480 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=138, state=RUNNABLE; OpenRegionProcedure eae77a063b55ab99c2cc64b694025001, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:04,482 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; OpenRegionProcedure 5d0f2f4733f3e79e1cd236e7ae156ff5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,483 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=136, state=RUNNABLE; OpenRegionProcedure 792e287eae4d3ed13fc19cf3418fab18, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,486 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=137, state=RUNNABLE; OpenRegionProcedure 8070ea769d786df5f3f108b3f07e479d, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 19:15:04,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f7ba8ee9a68ff4c641d3a1737e0f33b1, NAME => 'Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 19:15:04,636 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 
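While the OpenRegionProcedures (pid=141 through 145) run on the region servers, the caller keeps polling the master ("Checking to see if procedure is done pid=135"). From a plain client the equivalent wait can be expressed against the Admin API, for example by polling isTableAvailable; the sketch below is a rough illustration and its 60-second cap is an arbitrary choice, not a value from the test.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testDisabledTableMove");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      long deadline = System.currentTimeMillis() + 60_000L; // arbitrary 60s cap
      // isTableAvailable returns true once every region of the table is assigned and open.
      while (!admin.isTableAvailable(table)) {
        if (System.currentTimeMillis() > deadline) {
          throw new IllegalStateException("table not available within 60s");
        }
        Thread.sleep(200);
      }
    }
  }
}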
2023-07-18 19:15:04,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 792e287eae4d3ed13fc19cf3418fab18, NAME => 'Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,641 INFO [StoreOpener-792e287eae4d3ed13fc19cf3418fab18-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,641 INFO [StoreOpener-f7ba8ee9a68ff4c641d3a1737e0f33b1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,643 DEBUG [StoreOpener-f7ba8ee9a68ff4c641d3a1737e0f33b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/f 2023-07-18 19:15:04,643 DEBUG [StoreOpener-f7ba8ee9a68ff4c641d3a1737e0f33b1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/f 2023-07-18 19:15:04,643 DEBUG [StoreOpener-792e287eae4d3ed13fc19cf3418fab18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/f 2023-07-18 19:15:04,643 DEBUG [StoreOpener-792e287eae4d3ed13fc19cf3418fab18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/f 2023-07-18 19:15:04,643 INFO [StoreOpener-f7ba8ee9a68ff4c641d3a1737e0f33b1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f7ba8ee9a68ff4c641d3a1737e0f33b1 columnFamilyName f 2023-07-18 19:15:04,644 INFO [StoreOpener-792e287eae4d3ed13fc19cf3418fab18-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 792e287eae4d3ed13fc19cf3418fab18 columnFamilyName f 2023-07-18 19:15:04,644 INFO [StoreOpener-f7ba8ee9a68ff4c641d3a1737e0f33b1-1] regionserver.HStore(310): Store=f7ba8ee9a68ff4c641d3a1737e0f33b1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:04,644 INFO [StoreOpener-792e287eae4d3ed13fc19cf3418fab18-1] regionserver.HStore(310): Store=792e287eae4d3ed13fc19cf3418fab18/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:04,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,645 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,646 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:04,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:04,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:04,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:04,653 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 792e287eae4d3ed13fc19cf3418fab18; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11530370880, jitterRate=0.07384946942329407}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:04,653 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f7ba8ee9a68ff4c641d3a1737e0f33b1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11860590720, jitterRate=0.1046035885810852}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:04,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f7ba8ee9a68ff4c641d3a1737e0f33b1: 2023-07-18 19:15:04,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 792e287eae4d3ed13fc19cf3418fab18: 2023-07-18 19:15:04,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1., pid=141, masterSystemTime=1689707704631 2023-07-18 19:15:04,654 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18., pid=144, masterSystemTime=1689707704633 2023-07-18 19:15:04,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:04,656 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 
2023-07-18 19:15:04,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eae77a063b55ab99c2cc64b694025001, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 19:15:04,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,656 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=f7ba8ee9a68ff4c641d3a1737e0f33b1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,656 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704656"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707704656"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707704656"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707704656"}]},"ts":"1689707704656"} 2023-07-18 19:15:04,656 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:04,657 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 
2023-07-18 19:15:04,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8070ea769d786df5f3f108b3f07e479d, NAME => 'Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 19:15:04,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,658 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=792e287eae4d3ed13fc19cf3418fab18, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,658 INFO [StoreOpener-eae77a063b55ab99c2cc64b694025001-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,658 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704658"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707704658"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707704658"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707704658"}]},"ts":"1689707704658"} 2023-07-18 19:15:04,660 INFO [StoreOpener-8070ea769d786df5f3f108b3f07e479d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,662 DEBUG [StoreOpener-8070ea769d786df5f3f108b3f07e479d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/f 2023-07-18 19:15:04,662 DEBUG [StoreOpener-8070ea769d786df5f3f108b3f07e479d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/f 2023-07-18 19:15:04,663 DEBUG [StoreOpener-eae77a063b55ab99c2cc64b694025001-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/f 2023-07-18 19:15:04,663 DEBUG [StoreOpener-eae77a063b55ab99c2cc64b694025001-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/f 2023-07-18 19:15:04,664 INFO [StoreOpener-eae77a063b55ab99c2cc64b694025001-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eae77a063b55ab99c2cc64b694025001 columnFamilyName f 2023-07-18 19:15:04,664 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-18 19:15:04,664 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; OpenRegionProcedure f7ba8ee9a68ff4c641d3a1737e0f33b1, server=jenkins-hbase4.apache.org,41417,1689707679207 in 180 msec 2023-07-18 19:15:04,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=136 2023-07-18 19:15:04,664 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=136, state=SUCCESS; OpenRegionProcedure 792e287eae4d3ed13fc19cf3418fab18, server=jenkins-hbase4.apache.org,44751,1689707683024 in 179 msec 2023-07-18 19:15:04,665 INFO [StoreOpener-eae77a063b55ab99c2cc64b694025001-1] regionserver.HStore(310): Store=eae77a063b55ab99c2cc64b694025001/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:04,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, ASSIGN in 344 msec 2023-07-18 19:15:04,666 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, ASSIGN in 344 msec 2023-07-18 19:15:04,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,666 INFO [StoreOpener-8070ea769d786df5f3f108b3f07e479d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, 
compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8070ea769d786df5f3f108b3f07e479d columnFamilyName f 2023-07-18 19:15:04,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,667 INFO [StoreOpener-8070ea769d786df5f3f108b3f07e479d-1] regionserver.HStore(310): Store=8070ea769d786df5f3f108b3f07e479d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:04,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:04,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:04,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:04,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:04,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eae77a063b55ab99c2cc64b694025001; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11783559360, jitterRate=0.09742948412895203}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:04,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eae77a063b55ab99c2cc64b694025001: 2023-07-18 19:15:04,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8070ea769d786df5f3f108b3f07e479d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9512513920, jitterRate=-0.1140781044960022}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:04,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
8070ea769d786df5f3f108b3f07e479d: 2023-07-18 19:15:04,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001., pid=142, masterSystemTime=1689707704631 2023-07-18 19:15:04,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d., pid=145, masterSystemTime=1689707704633 2023-07-18 19:15:04,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:04,678 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=eae77a063b55ab99c2cc64b694025001, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:04,678 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704678"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707704678"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707704678"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707704678"}]},"ts":"1689707704678"} 2023-07-18 19:15:04,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:04,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:04,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 
2023-07-18 19:15:04,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d0f2f4733f3e79e1cd236e7ae156ff5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 19:15:04,680 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8070ea769d786df5f3f108b3f07e479d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,680 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704679"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707704679"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707704679"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707704679"}]},"ts":"1689707704679"} 2023-07-18 19:15:04,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:04,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,682 INFO [StoreOpener-5d0f2f4733f3e79e1cd236e7ae156ff5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=138 2023-07-18 19:15:04,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=138, state=SUCCESS; OpenRegionProcedure eae77a063b55ab99c2cc64b694025001, server=jenkins-hbase4.apache.org,41417,1689707679207 in 200 msec 2023-07-18 19:15:04,684 DEBUG [StoreOpener-5d0f2f4733f3e79e1cd236e7ae156ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/f 2023-07-18 19:15:04,684 DEBUG [StoreOpener-5d0f2f4733f3e79e1cd236e7ae156ff5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/f 2023-07-18 19:15:04,684 INFO [StoreOpener-5d0f2f4733f3e79e1cd236e7ae156ff5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d0f2f4733f3e79e1cd236e7ae156ff5 columnFamilyName f 2023-07-18 19:15:04,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=137 2023-07-18 19:15:04,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=137, state=SUCCESS; OpenRegionProcedure 8070ea769d786df5f3f108b3f07e479d, server=jenkins-hbase4.apache.org,44751,1689707683024 in 197 msec 2023-07-18 19:15:04,685 INFO [StoreOpener-5d0f2f4733f3e79e1cd236e7ae156ff5-1] regionserver.HStore(310): Store=5d0f2f4733f3e79e1cd236e7ae156ff5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:04,686 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, ASSIGN in 362 msec 2023-07-18 19:15:04,686 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, ASSIGN in 365 msec 2023-07-18 19:15:04,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:04,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:04,693 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5d0f2f4733f3e79e1cd236e7ae156ff5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11419752640, jitterRate=0.06354734301567078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:04,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5d0f2f4733f3e79e1cd236e7ae156ff5: 2023-07-18 19:15:04,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5., pid=143, masterSystemTime=1689707704633 2023-07-18 19:15:04,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:04,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:04,697 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=5d0f2f4733f3e79e1cd236e7ae156ff5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,697 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704696"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707704696"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707704696"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707704696"}]},"ts":"1689707704696"} 2023-07-18 19:15:04,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-18 19:15:04,700 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; OpenRegionProcedure 5d0f2f4733f3e79e1cd236e7ae156ff5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 216 msec 2023-07-18 19:15:04,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-18 19:15:04,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, ASSIGN in 380 msec 2023-07-18 19:15:04,702 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:04,702 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707704702"}]},"ts":"1689707704702"} 2023-07-18 19:15:04,703 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 19:15:04,705 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:04,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 484 msec 2023-07-18 19:15:04,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 19:15:04,834 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-18 19:15:04,834 
DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-18 19:15:04,835 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,840 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-18 19:15:04,840 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,840 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 19:15:04,841 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:04,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 19:15:04,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:04,850 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 19:15:04,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 19:15:04,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:04,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 19:15:04,857 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707704857"}]},"ts":"1689707704857"} 2023-07-18 19:15:04,859 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 19:15:04,861 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 19:15:04,867 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, UNASSIGN}] 2023-07-18 19:15:04,871 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, UNASSIGN 2023-07-18 19:15:04,871 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, UNASSIGN 2023-07-18 19:15:04,872 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, UNASSIGN 2023-07-18 19:15:04,872 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, UNASSIGN 2023-07-18 19:15:04,872 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, UNASSIGN 2023-07-18 19:15:04,873 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=792e287eae4d3ed13fc19cf3418fab18, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,873 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=5d0f2f4733f3e79e1cd236e7ae156ff5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,873 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704873"}]},"ts":"1689707704873"} 2023-07-18 19:15:04,873 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704873"}]},"ts":"1689707704873"} 2023-07-18 19:15:04,873 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=f7ba8ee9a68ff4c641d3a1737e0f33b1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,874 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8070ea769d786df5f3f108b3f07e479d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:04,874 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707704873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704873"}]},"ts":"1689707704873"} 2023-07-18 19:15:04,874 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704874"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704874"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704874"}]},"ts":"1689707704874"} 2023-07-18 19:15:04,874 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=eae77a063b55ab99c2cc64b694025001, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:04,874 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707704873"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707704873"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707704873"}]},"ts":"1689707704873"} 2023-07-18 19:15:04,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 792e287eae4d3ed13fc19cf3418fab18, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,876 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=150, state=RUNNABLE; CloseRegionProcedure 5d0f2f4733f3e79e1cd236e7ae156ff5, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=151, state=RUNNABLE; CloseRegionProcedure f7ba8ee9a68ff4c641d3a1737e0f33b1, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:04,878 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=148, state=RUNNABLE; CloseRegionProcedure 8070ea769d786df5f3f108b3f07e479d, server=jenkins-hbase4.apache.org,44751,1689707683024}] 2023-07-18 19:15:04,879 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=149, state=RUNNABLE; CloseRegionProcedure eae77a063b55ab99c2cc64b694025001, server=jenkins-hbase4.apache.org,41417,1689707679207}] 2023-07-18 19:15:04,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 19:15:05,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:05,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5d0f2f4733f3e79e1cd236e7ae156ff5, disabling compactions & flushes 2023-07-18 19:15:05,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 
2023-07-18 19:15:05,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:05,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. after waiting 0 ms 2023-07-18 19:15:05,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:05,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:05,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eae77a063b55ab99c2cc64b694025001, disabling compactions & flushes 2023-07-18 19:15:05,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:05,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:05,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. after waiting 0 ms 2023-07-18 19:15:05,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 2023-07-18 19:15:05,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:05,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5. 2023-07-18 19:15:05,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5d0f2f4733f3e79e1cd236e7ae156ff5: 2023-07-18 19:15:05,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:05,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001. 
2023-07-18 19:15:05,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eae77a063b55ab99c2cc64b694025001: 2023-07-18 19:15:05,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:05,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:05,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8070ea769d786df5f3f108b3f07e479d, disabling compactions & flushes 2023-07-18 19:15:05,038 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:05,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:05,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. after waiting 0 ms 2023-07-18 19:15:05,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:05,039 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=5d0f2f4733f3e79e1cd236e7ae156ff5, regionState=CLOSED 2023-07-18 19:15:05,039 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707705039"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707705039"}]},"ts":"1689707705039"} 2023-07-18 19:15:05,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:05,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:05,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:05,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f7ba8ee9a68ff4c641d3a1737e0f33b1, disabling compactions & flushes 2023-07-18 19:15:05,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d. 2023-07-18 19:15:05,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 
2023-07-18 19:15:05,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8070ea769d786df5f3f108b3f07e479d: 2023-07-18 19:15:05,049 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=eae77a063b55ab99c2cc64b694025001, regionState=CLOSED 2023-07-18 19:15:05,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:05,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. after waiting 0 ms 2023-07-18 19:15:05,049 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707705049"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707705049"}]},"ts":"1689707705049"} 2023-07-18 19:15:05,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:05,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:05,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:05,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 792e287eae4d3ed13fc19cf3418fab18, disabling compactions & flushes 2023-07-18 19:15:05,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:05,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=150 2023-07-18 19:15:05,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:05,053 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=150, state=SUCCESS; CloseRegionProcedure 5d0f2f4733f3e79e1cd236e7ae156ff5, server=jenkins-hbase4.apache.org,44751,1689707683024 in 171 msec 2023-07-18 19:15:05,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. after waiting 0 ms 2023-07-18 19:15:05,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 
2023-07-18 19:15:05,053 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8070ea769d786df5f3f108b3f07e479d, regionState=CLOSED 2023-07-18 19:15:05,053 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689707705053"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707705053"}]},"ts":"1689707705053"} 2023-07-18 19:15:05,054 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5d0f2f4733f3e79e1cd236e7ae156ff5, UNASSIGN in 186 msec 2023-07-18 19:15:05,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=149 2023-07-18 19:15:05,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=149, state=SUCCESS; CloseRegionProcedure eae77a063b55ab99c2cc64b694025001, server=jenkins-hbase4.apache.org,41417,1689707679207 in 172 msec 2023-07-18 19:15:05,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:05,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1. 2023-07-18 19:15:05,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f7ba8ee9a68ff4c641d3a1737e0f33b1: 2023-07-18 19:15:05,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:05,060 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=eae77a063b55ab99c2cc64b694025001, UNASSIGN in 188 msec 2023-07-18 19:15:05,060 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=f7ba8ee9a68ff4c641d3a1737e0f33b1, regionState=CLOSED 2023-07-18 19:15:05,060 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707705060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707705060"}]},"ts":"1689707705060"} 2023-07-18 19:15:05,061 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=148 2023-07-18 19:15:05,061 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=148, state=SUCCESS; CloseRegionProcedure 8070ea769d786df5f3f108b3f07e479d, server=jenkins-hbase4.apache.org,44751,1689707683024 in 177 msec 2023-07-18 19:15:05,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8070ea769d786df5f3f108b3f07e479d, UNASSIGN in 194 msec 2023-07-18 19:15:05,064 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=151 2023-07-18 
19:15:05,064 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=151, state=SUCCESS; CloseRegionProcedure f7ba8ee9a68ff4c641d3a1737e0f33b1, server=jenkins-hbase4.apache.org,41417,1689707679207 in 185 msec 2023-07-18 19:15:05,065 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f7ba8ee9a68ff4c641d3a1737e0f33b1, UNASSIGN in 197 msec 2023-07-18 19:15:05,071 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:05,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18. 2023-07-18 19:15:05,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 792e287eae4d3ed13fc19cf3418fab18: 2023-07-18 19:15:05,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:05,075 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=792e287eae4d3ed13fc19cf3418fab18, regionState=CLOSED 2023-07-18 19:15:05,075 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689707705075"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707705075"}]},"ts":"1689707705075"} 2023-07-18 19:15:05,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-18 19:15:05,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 792e287eae4d3ed13fc19cf3418fab18, server=jenkins-hbase4.apache.org,44751,1689707683024 in 204 msec 2023-07-18 19:15:05,082 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=146 2023-07-18 19:15:05,082 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=792e287eae4d3ed13fc19cf3418fab18, UNASSIGN in 213 msec 2023-07-18 19:15:05,082 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707705082"}]},"ts":"1689707705082"} 2023-07-18 19:15:05,083 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 19:15:05,085 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 19:15:05,087 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 235 msec 2023-07-18 19:15:05,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 19:15:05,158 INFO [Listener at localhost/40787] 
client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-18 19:15:05,158 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:05,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 19:15:05,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2042780629, current retry=0 2023-07-18 19:15:05,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_2042780629. 
2023-07-18 19:15:05,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:05,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,176 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 19:15:05,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:05,180 INFO [Listener at localhost/40787] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 19:15:05,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 19:15:05,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:05,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 920 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:36768 deadline: 1689707765181, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-18 19:15:05,182 DEBUG [Listener at localhost/40787] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-18 19:15:05,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 19:15:05,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,185 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_2042780629' 2023-07-18 19:15:05,186 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:05,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-18 19:15:05,193 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:05,193 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:05,193 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:05,193 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:05,193 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:05,195 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/f, FileablePath, 
hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/recovered.edits] 2023-07-18 19:15:05,195 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/recovered.edits] 2023-07-18 19:15:05,195 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/recovered.edits] 2023-07-18 19:15:05,195 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/recovered.edits] 2023-07-18 19:15:05,195 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/f, FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/recovered.edits] 2023-07-18 19:15:05,203 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18/recovered.edits/4.seqid 2023-07-18 19:15:05,203 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5/recovered.edits/4.seqid 2023-07-18 19:15:05,203 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d/recovered.edits/4.seqid 2023-07-18 19:15:05,203 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001/recovered.edits/4.seqid 2023-07-18 19:15:05,203 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/792e287eae4d3ed13fc19cf3418fab18 2023-07-18 19:15:05,204 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/5d0f2f4733f3e79e1cd236e7ae156ff5 2023-07-18 19:15:05,204 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/8070ea769d786df5f3f108b3f07e479d 2023-07-18 19:15:05,204 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/recovered.edits/4.seqid to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/archive/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1/recovered.edits/4.seqid 2023-07-18 19:15:05,204 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/eae77a063b55ab99c2cc64b694025001 2023-07-18 19:15:05,204 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/.tmp/data/default/Group_testDisabledTableMove/f7ba8ee9a68ff4c641d3a1737e0f33b1 2023-07-18 19:15:05,205 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 19:15:05,207 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,209 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 19:15:05,213 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-18 19:15:05,214 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,214 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-18 19:15:05,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707705214"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707705214"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707705214"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707705214"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,215 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707705214"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,216 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 19:15:05,216 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 792e287eae4d3ed13fc19cf3418fab18, NAME => 'Group_testDisabledTableMove,,1689707704222.792e287eae4d3ed13fc19cf3418fab18.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8070ea769d786df5f3f108b3f07e479d, NAME => 'Group_testDisabledTableMove,aaaaa,1689707704222.8070ea769d786df5f3f108b3f07e479d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => eae77a063b55ab99c2cc64b694025001, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689707704222.eae77a063b55ab99c2cc64b694025001.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5d0f2f4733f3e79e1cd236e7ae156ff5, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689707704222.5d0f2f4733f3e79e1cd236e7ae156ff5.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => f7ba8ee9a68ff4c641d3a1737e0f33b1, NAME => 'Group_testDisabledTableMove,zzzzz,1689707704222.f7ba8ee9a68ff4c641d3a1737e0f33b1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 19:15:05,216 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-18 19:15:05,216 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707705216"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:05,218 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 19:15:05,219 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 19:15:05,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 36 msec 2023-07-18 19:15:05,253 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-18 19:15:05,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-18 19:15:05,292 INFO [Listener at localhost/40787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-18 19:15:05,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:05,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:05,296 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:05,296 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561] to rsgroup default 2023-07-18 19:15:05,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:05,300 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2042780629, current retry=0 2023-07-18 19:15:05,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,36387,1689707679286, jenkins-hbase4.apache.org,39561,1689707679120] are moved back to Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_2042780629 => default 2023-07-18 19:15:05,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:05,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_2042780629 2023-07-18 19:15:05,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:15:05,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:05,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:05,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:05,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:05,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:05,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:05,310 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:05,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:05,314 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:05,317 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:05,317 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:05,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:05,322 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:05,325 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,325 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,326 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:05,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:05,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 954 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708905326, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:05,327 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:05,328 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:05,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,329 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,329 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:05,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:05,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:05,347 INFO [Listener at localhost/40787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=527 (was 524) Potentially hanging thread: hconnection-0x394eed7c-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1798080720_17 at /127.0.0.1:47352 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f700c8a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1144594579_17 at /127.0.0.1:43256 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 803) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=336 (was 336), ProcessCount=173 (was 173), AvailableMemoryMB=2873 (was 2875) 2023-07-18 19:15:05,347 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=527 is superior to 500 2023-07-18 19:15:05,364 INFO [Listener at localhost/40787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=527, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=336, ProcessCount=173, AvailableMemoryMB=2872 2023-07-18 19:15:05,364 WARN [Listener at localhost/40787] hbase.ResourceChecker(130): Thread=527 is superior to 500 2023-07-18 19:15:05,364 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 19:15:05,367 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,367 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:05,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:05,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:05,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:05,368 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:05,369 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:05,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:05,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:05,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:05,383 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:05,384 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:05,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
19:15:05,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:05,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:05,389 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:05,391 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,391 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43617] to rsgroup master 2023-07-18 19:15:05,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:05,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] ipc.CallRunner(144): callId: 982 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:36768 deadline: 1689708905393, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 2023-07-18 19:15:05,393 WARN [Listener at localhost/40787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43617 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:05,395 INFO [Listener at localhost/40787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:05,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:05,396 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:05,396 INFO [Listener at localhost/40787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:36387, jenkins-hbase4.apache.org:39561, jenkins-hbase4.apache.org:41417, jenkins-hbase4.apache.org:44751], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:05,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:05,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43617] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:05,398 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 19:15:05,398 INFO [Listener at localhost/40787] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 19:15:05,398 DEBUG [Listener at localhost/40787] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x37ac0919 to 127.0.0.1:62147 2023-07-18 19:15:05,398 DEBUG [Listener at localhost/40787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,399 DEBUG [Listener at localhost/40787] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 19:15:05,399 DEBUG [Listener at localhost/40787] util.JVMClusterUtil(257): Found active master hash=1182890613, stopped=false 2023-07-18 19:15:05,399 DEBUG [Listener at localhost/40787] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 19:15:05,399 DEBUG [Listener at localhost/40787] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 19:15:05,399 INFO [Listener at localhost/40787] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:15:05,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:05,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:05,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:05,402 DEBUG 
[Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:05,402 INFO [Listener at localhost/40787] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 19:15:05,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:05,402 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:05,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:05,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:05,403 DEBUG [Listener at localhost/40787] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b1671e2 to 127.0.0.1:62147 2023-07-18 19:15:05,404 DEBUG [Listener at localhost/40787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:05,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:05,407 INFO [Listener at localhost/40787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39561,1689707679120' ***** 2023-07-18 19:15:05,407 INFO [Listener at localhost/40787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:05,407 INFO [Listener at localhost/40787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41417,1689707679207' ***** 2023-07-18 19:15:05,406 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:05,407 INFO [Listener at localhost/40787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:05,407 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:05,408 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:05,410 INFO [Listener at localhost/40787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36387,1689707679286' ***** 2023-07-18 19:15:05,415 INFO [Listener at localhost/40787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:05,417 INFO [Listener at localhost/40787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44751,1689707683024' ***** 2023-07-18 19:15:05,418 INFO 
[RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:05,420 INFO [Listener at localhost/40787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:05,420 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:05,433 INFO [RS:0;jenkins-hbase4:39561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@51e5ec5c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:05,433 INFO [RS:3;jenkins-hbase4:44751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@422d8bf2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:05,433 INFO [RS:2;jenkins-hbase4:36387] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@13359f3{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:05,433 INFO [RS:1;jenkins-hbase4:41417] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@141febe3{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:05,439 INFO [RS:3;jenkins-hbase4:44751] server.AbstractConnector(383): Stopped ServerConnector@3e629c81{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,439 INFO [RS:3;jenkins-hbase4:44751] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:05,440 INFO [RS:2;jenkins-hbase4:36387] server.AbstractConnector(383): Stopped ServerConnector@86ca53{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,440 INFO [RS:3;jenkins-hbase4:44751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e42e83e{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:05,440 INFO [RS:1;jenkins-hbase4:41417] server.AbstractConnector(383): Stopped ServerConnector@4a909f08{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,440 INFO [RS:0;jenkins-hbase4:39561] server.AbstractConnector(383): Stopped ServerConnector@147b8d82{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,441 INFO [RS:1;jenkins-hbase4:41417] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:05,440 INFO [RS:2;jenkins-hbase4:36387] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:05,441 INFO [RS:0;jenkins-hbase4:39561] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:05,442 INFO [RS:3;jenkins-hbase4:44751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3652f836{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:05,442 INFO [RS:0;jenkins-hbase4:39561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@21f5379f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:05,442 INFO [RS:2;jenkins-hbase4:36387] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4c2c946a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:05,442 INFO [RS:1;jenkins-hbase4:41417] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d6b47ba{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:05,443 INFO [RS:0;jenkins-hbase4:39561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b5a29b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:05,443 INFO [RS:2;jenkins-hbase4:36387] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7abf9a1c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:05,445 INFO [RS:1;jenkins-hbase4:41417] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eb4fc00{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:05,446 INFO [RS:2;jenkins-hbase4:36387] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:05,447 INFO [RS:2;jenkins-hbase4:36387] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:05,447 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:05,447 INFO [RS:2;jenkins-hbase4:36387] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:05,447 INFO [RS:0;jenkins-hbase4:39561] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:05,447 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:15:05,447 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:05,447 DEBUG [RS:2;jenkins-hbase4:36387] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x65553943 to 127.0.0.1:62147 2023-07-18 19:15:05,448 DEBUG [RS:2;jenkins-hbase4:36387] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,448 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36387,1689707679286; all regions closed. 2023-07-18 19:15:05,448 INFO [RS:3;jenkins-hbase4:44751] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:05,448 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:05,448 INFO [RS:3;jenkins-hbase4:44751] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:05,448 INFO [RS:0;jenkins-hbase4:39561] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:05,448 INFO [RS:3;jenkins-hbase4:44751] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 19:15:05,449 INFO [RS:1;jenkins-hbase4:41417] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:05,449 INFO [RS:1;jenkins-hbase4:41417] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:05,449 INFO [RS:1;jenkins-hbase4:41417] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:05,449 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(3305): Received CLOSE for c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:05,448 INFO [RS:0;jenkins-hbase4:39561] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:05,449 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(3305): Received CLOSE for 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:05,449 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:05,449 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:15:05,450 DEBUG [RS:0;jenkins-hbase4:39561] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0e0e1d9b to 127.0.0.1:62147 2023-07-18 19:15:05,450 DEBUG [RS:0;jenkins-hbase4:39561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,450 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:05,450 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39561,1689707679120; all regions closed. 2023-07-18 19:15:05,450 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(3305): Received CLOSE for 7af7ba814960ac41543f63d97428e575 2023-07-18 19:15:05,450 DEBUG [RS:1;jenkins-hbase4:41417] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x504f2fba to 127.0.0.1:62147 2023-07-18 19:15:05,451 DEBUG [RS:1;jenkins-hbase4:41417] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9d7c43afbc30b4fb381514d1ccc4d668, disabling compactions & flushes 2023-07-18 19:15:05,451 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 19:15:05,451 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1478): Online Regions={c29891d66bd2ca5aa0c94f69449e63a5=testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5.} 2023-07-18 19:15:05,450 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(3305): Received CLOSE for 13ce679c9b6de2684bc3af2f72b426ea 2023-07-18 19:15:05,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:05,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c29891d66bd2ca5aa0c94f69449e63a5, disabling compactions & flushes 2023-07-18 19:15:05,452 DEBUG [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1504): Waiting on c29891d66bd2ca5aa0c94f69449e63a5 2023-07-18 19:15:05,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
2023-07-18 19:15:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:05,452 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. after waiting 0 ms 2023-07-18 19:15:05,452 DEBUG [RS:3;jenkins-hbase4:44751] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3270d993 to 127.0.0.1:62147 2023-07-18 19:15:05,452 DEBUG [RS:3;jenkins-hbase4:44751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:05,452 INFO [RS:3;jenkins-hbase4:44751] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:05,452 INFO [RS:3;jenkins-hbase4:44751] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:05,453 INFO [RS:3;jenkins-hbase4:44751] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:05,453 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 19:15:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:05,452 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. after waiting 0 ms 2023-07-18 19:15:05,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
2023-07-18 19:15:05,453 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 19:15:05,453 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1478): Online Regions={9d7c43afbc30b4fb381514d1ccc4d668=unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668., 7af7ba814960ac41543f63d97428e575=hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575., 1588230740=hbase:meta,,1.1588230740, 13ce679c9b6de2684bc3af2f72b426ea=hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea.} 2023-07-18 19:15:05,453 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:15:05,453 DEBUG [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1504): Waiting on 13ce679c9b6de2684bc3af2f72b426ea, 1588230740, 7af7ba814960ac41543f63d97428e575, 9d7c43afbc30b4fb381514d1ccc4d668 2023-07-18 19:15:05,453 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:15:05,453 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:15:05,453 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:15:05,453 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:15:05,454 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=36.31 KB heapSize=59.22 KB 2023-07-18 19:15:05,454 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,461 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,462 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/unmovedTable/9d7c43afbc30b4fb381514d1ccc4d668/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 19:15:05,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:05,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9d7c43afbc30b4fb381514d1ccc4d668: 2023-07-18 19:15:05,480 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689707700246.9d7c43afbc30b4fb381514d1ccc4d668. 2023-07-18 19:15:05,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7af7ba814960ac41543f63d97428e575, disabling compactions & flushes 2023-07-18 19:15:05,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 
2023-07-18 19:15:05,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:15:05,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. after waiting 0 ms 2023-07-18 19:15:05,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:15:05,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7af7ba814960ac41543f63d97428e575 1/1 column families, dataSize=27.09 KB heapSize=44.70 KB 2023-07-18 19:15:05,483 DEBUG [RS:0;jenkins-hbase4:39561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,483 INFO [RS:0;jenkins-hbase4:39561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39561%2C1689707679120:(num 1689707681189) 2023-07-18 19:15:05,483 DEBUG [RS:0;jenkins-hbase4:39561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,483 INFO [RS:0;jenkins-hbase4:39561] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,484 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/default/testRename/c29891d66bd2ca5aa0c94f69449e63a5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 19:15:05,484 INFO [RS:0;jenkins-hbase4:39561] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:05,484 INFO [RS:0;jenkins-hbase4:39561] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:05,484 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:05,484 INFO [RS:0;jenkins-hbase4:39561] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:05,485 INFO [RS:0;jenkins-hbase4:39561] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:05,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 2023-07-18 19:15:05,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c29891d66bd2ca5aa0c94f69449e63a5: 2023-07-18 19:15:05,485 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689707698578.c29891d66bd2ca5aa0c94f69449e63a5. 
2023-07-18 19:15:05,486 INFO [RS:0;jenkins-hbase4:39561] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39561 2023-07-18 19:15:05,487 DEBUG [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,487 INFO [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36387%2C1689707679286.meta:.meta(num 1689707681499) 2023-07-18 19:15:05,501 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,501 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,501 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,502 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39561,1689707679120] 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39561,1689707679120 2023-07-18 19:15:05,502 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39561,1689707679120; numProcessing=1 2023-07-18 19:15:05,502 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,504 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39561,1689707679120 already deleted, retry=false 
2023-07-18 19:15:05,505 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39561,1689707679120 expired; onlineServers=3 2023-07-18 19:15:05,512 DEBUG [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,512 INFO [RS:2;jenkins-hbase4:36387] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36387%2C1689707679286:(num 1689707681183) 2023-07-18 19:15:05,512 DEBUG [RS:2;jenkins-hbase4:36387] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,512 INFO [RS:2;jenkins-hbase4:36387] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,520 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=33.39 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/5bf05fd7a0a047d98a75d1954a6d4727 2023-07-18 19:15:05,520 INFO [RS:2;jenkins-hbase4:36387] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:05,531 INFO [RS:2;jenkins-hbase4:36387] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:05,531 INFO [RS:2;jenkins-hbase4:36387] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:05,531 INFO [RS:2;jenkins-hbase4:36387] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:05,531 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 19:15:05,533 INFO [RS:2;jenkins-hbase4:36387] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36387 2023-07-18 19:15:05,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.09 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/.tmp/m/83811587676c41969ecc64b9e9af6df3 2023-07-18 19:15:05,536 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:15:05,536 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,536 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:15:05,536 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36387,1689707679286 2023-07-18 19:15:05,537 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5bf05fd7a0a047d98a75d1954a6d4727 2023-07-18 19:15:05,537 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36387,1689707679286] 2023-07-18 19:15:05,537 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36387,1689707679286; numProcessing=2 2023-07-18 19:15:05,539 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36387,1689707679286 already deleted, retry=false 2023-07-18 19:15:05,539 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36387,1689707679286 expired; onlineServers=2 2023-07-18 19:15:05,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83811587676c41969ecc64b9e9af6df3 2023-07-18 19:15:05,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/.tmp/m/83811587676c41969ecc64b9e9af6df3 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m/83811587676c41969ecc64b9e9af6df3 2023-07-18 19:15:05,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83811587676c41969ecc64b9e9af6df3 2023-07-18 19:15:05,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/m/83811587676c41969ecc64b9e9af6df3, entries=28, sequenceid=101, filesize=6.1 K 2023-07-18 19:15:05,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.09 KB/27740, heapSize ~44.68 KB/45752, currentSize=0 B/0 for 7af7ba814960ac41543f63d97428e575 in 67ms, sequenceid=101, compaction requested=false 2023-07-18 19:15:05,558 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/rep_barrier/0b2af7e08ba147f08d292ed8b4ab95bf 2023-07-18 19:15:05,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/rsgroup/7af7ba814960ac41543f63d97428e575/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-18 19:15:05,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:05,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7af7ba814960ac41543f63d97428e575: 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689707681916.7af7ba814960ac41543f63d97428e575. 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 13ce679c9b6de2684bc3af2f72b426ea, disabling compactions & flushes 2023-07-18 19:15:05,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. after waiting 0 ms 2023-07-18 19:15:05,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 
2023-07-18 19:15:05,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b2af7e08ba147f08d292ed8b4ab95bf 2023-07-18 19:15:05,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/namespace/13ce679c9b6de2684bc3af2f72b426ea/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-18 19:15:05,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:15:05,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 13ce679c9b6de2684bc3af2f72b426ea: 2023-07-18 19:15:05,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689707681857.13ce679c9b6de2684bc3af2f72b426ea. 2023-07-18 19:15:05,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/44515f0cc385416ebab1df134314125b 2023-07-18 19:15:05,589 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 44515f0cc385416ebab1df134314125b 2023-07-18 19:15:05,589 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/info/5bf05fd7a0a047d98a75d1954a6d4727 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/5bf05fd7a0a047d98a75d1954a6d4727 2023-07-18 19:15:05,595 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5bf05fd7a0a047d98a75d1954a6d4727 2023-07-18 19:15:05,595 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/info/5bf05fd7a0a047d98a75d1954a6d4727, entries=52, sequenceid=216, filesize=10.7 K 2023-07-18 19:15:05,596 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/rep_barrier/0b2af7e08ba147f08d292ed8b4ab95bf as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier/0b2af7e08ba147f08d292ed8b4ab95bf 2023-07-18 19:15:05,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0b2af7e08ba147f08d292ed8b4ab95bf 2023-07-18 19:15:05,601 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/rep_barrier/0b2af7e08ba147f08d292ed8b4ab95bf, entries=8, sequenceid=216, filesize=5.8 K 2023-07-18 19:15:05,602 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/.tmp/table/44515f0cc385416ebab1df134314125b as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/44515f0cc385416ebab1df134314125b 2023-07-18 19:15:05,608 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 44515f0cc385416ebab1df134314125b 2023-07-18 19:15:05,609 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/table/44515f0cc385416ebab1df134314125b, entries=16, sequenceid=216, filesize=6.0 K 2023-07-18 19:15:05,609 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~36.31 KB/37186, heapSize ~59.17 KB/60592, currentSize=0 B/0 for 1588230740 in 155ms, sequenceid=216, compaction requested=true 2023-07-18 19:15:05,628 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/data/hbase/meta/1588230740/recovered.edits/219.seqid, newMaxSeqId=219, maxSeqId=107 2023-07-18 19:15:05,629 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:05,629 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:05,629 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:15:05,629 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:05,652 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41417,1689707679207; all regions closed. 2023-07-18 19:15:05,653 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44751,1689707683024; all regions closed. 
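The entries above show each region's memstore being flushed out to new HFiles (info, rep_barrier and table for hbase:meta) before the region is closed and the region servers report "all regions closed". The same flush can also be requested explicitly from test code through the public Admin API; the following is a minimal, illustrative sketch and assumes the HBaseTestingUtility instance that started the mini cluster is passed in (the class and method names are not from the log).

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class FlushSketch {
  // Illustrative helper: util is assumed to be the HBaseTestingUtility
  // that started the mini cluster shown in the log above.
  static void flushSystemTables(HBaseTestingUtility util) throws IOException {
    try (Admin admin = util.getConnection().getAdmin()) {
      // Write the memstores out as HFiles, the same work the
      // RS_CLOSE_* handlers perform automatically during shutdown.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
      admin.flush(TableName.META_TABLE_NAME);
    }
  }
}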
2023-07-18 19:15:05,667 DEBUG [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,667 INFO [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41417%2C1689707679207.meta:.meta(num 1689707684225) 2023-07-18 19:15:05,668 DEBUG [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,668 INFO [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44751%2C1689707683024.meta:.meta(num 1689707690936) 2023-07-18 19:15:05,680 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/WALs/jenkins-hbase4.apache.org,44751,1689707683024/jenkins-hbase4.apache.org%2C44751%2C1689707683024.1689707683336 not finished, retry = 0 2023-07-18 19:15:05,681 DEBUG [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,681 INFO [RS:1;jenkins-hbase4:41417] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41417%2C1689707679207:(num 1689707681184) 2023-07-18 19:15:05,681 DEBUG [RS:1;jenkins-hbase4:41417] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,681 INFO [RS:1;jenkins-hbase4:41417] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,682 INFO [RS:1;jenkins-hbase4:41417] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:05,682 INFO [RS:1;jenkins-hbase4:41417] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:05,682 INFO [RS:1;jenkins-hbase4:41417] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:05,682 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:05,682 INFO [RS:1;jenkins-hbase4:41417] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
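Above, each stopping region server closes its AsyncFSWAL and the finished WAL files are moved to the shared oldWALs directory. A WAL roll can also be triggered on demand for a specific server via the Admin API; a minimal sketch, assuming an open Connection to the cluster (the ServerName string is copied from the log, everything else is illustrative):

import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

final class WalRollSketch {
  // connection: assumed to be an open Connection to the mini cluster.
  static void rollWal(Connection connection) throws IOException {
    try (Admin admin = connection.getAdmin()) {
      // Ask this region server to roll its current WAL; the finished
      // file is later archived to oldWALs, as in the entries above.
      admin.rollWALWriter(
          ServerName.valueOf("jenkins-hbase4.apache.org,41417,1689707679207"));
    }
  }
}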
2023-07-18 19:15:05,683 INFO [RS:1;jenkins-hbase4:41417] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41417 2023-07-18 19:15:05,685 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:05,685 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41417,1689707679207 2023-07-18 19:15:05,685 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,687 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41417,1689707679207] 2023-07-18 19:15:05,687 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41417,1689707679207; numProcessing=3 2023-07-18 19:15:05,688 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41417,1689707679207 already deleted, retry=false 2023-07-18 19:15:05,688 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41417,1689707679207 expired; onlineServers=1 2023-07-18 19:15:05,783 DEBUG [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/oldWALs 2023-07-18 19:15:05,783 INFO [RS:3;jenkins-hbase4:44751] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44751%2C1689707683024:(num 1689707683336) 2023-07-18 19:15:05,783 DEBUG [RS:3;jenkins-hbase4:44751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,783 INFO [RS:3;jenkins-hbase4:44751] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:05,784 INFO [RS:3;jenkins-hbase4:44751] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:05,784 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
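The NodeDeleted events above are how the active master learns that a region server is gone: each server registers an ephemeral znode under /hbase/rs, and when its ZooKeeper session ends the node disappears and RegionServerTracker processes the expiration. The same deletion can be observed with a plain ZooKeeper client; a minimal, illustrative sketch (the quorum address is taken from the log, the class itself is not part of any HBase API):

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    // Connect to the test ZooKeeper ensemble from the log above.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:62147", 30000,
        (WatchedEvent event) -> System.out.println("event: " + event));
    // List the live region servers and leave a watch that fires
    // (NodeChildrenChanged) when one of the ephemeral nodes is deleted.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    servers.forEach(System.out::println);
    Thread.sleep(60_000); // keep the session open long enough to see events
    zk.close();
  }
}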
2023-07-18 19:15:05,785 INFO [RS:3;jenkins-hbase4:44751] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44751 2023-07-18 19:15:05,787 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44751,1689707683024 2023-07-18 19:15:05,787 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:05,788 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44751,1689707683024] 2023-07-18 19:15:05,788 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44751,1689707683024; numProcessing=4 2023-07-18 19:15:05,790 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44751,1689707683024 already deleted, retry=false 2023-07-18 19:15:05,790 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44751,1689707683024 expired; onlineServers=0 2023-07-18 19:15:05,790 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43617,1689707677179' ***** 2023-07-18 19:15:05,790 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 19:15:05,791 DEBUG [M:0;jenkins-hbase4:43617] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7bc704e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:05,791 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:05,793 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:05,793 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:05,793 INFO [M:0;jenkins-hbase4:43617] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@14922087{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 19:15:05,794 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:05,794 INFO [M:0;jenkins-hbase4:43617] server.AbstractConnector(383): Stopped ServerConnector@7a905274{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,794 INFO [M:0;jenkins-hbase4:43617] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:05,794 INFO [M:0;jenkins-hbase4:43617] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2ce7501{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:05,795 INFO [M:0;jenkins-hbase4:43617] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a1c2514{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:05,795 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43617,1689707677179 2023-07-18 19:15:05,795 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43617,1689707677179; all regions closed. 2023-07-18 19:15:05,795 DEBUG [M:0;jenkins-hbase4:43617] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:05,795 INFO [M:0;jenkins-hbase4:43617] master.HMaster(1491): Stopping master jetty server 2023-07-18 19:15:05,796 INFO [M:0;jenkins-hbase4:43617] server.AbstractConnector(383): Stopped ServerConnector@4fd4c8e1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:05,796 DEBUG [M:0;jenkins-hbase4:43617] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 19:15:05,796 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 19:15:05,796 DEBUG [M:0;jenkins-hbase4:43617] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 19:15:05,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707680802] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707680802,5,FailOnTimeoutGroup] 2023-07-18 19:15:05,796 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707680802] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707680802,5,FailOnTimeoutGroup] 2023-07-18 19:15:05,797 INFO [M:0;jenkins-hbase4:43617] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 19:15:05,797 INFO [M:0;jenkins-hbase4:43617] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-18 19:15:05,797 INFO [M:0;jenkins-hbase4:43617] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 19:15:05,797 DEBUG [M:0;jenkins-hbase4:43617] master.HMaster(1512): Stopping service threads 2023-07-18 19:15:05,797 INFO [M:0;jenkins-hbase4:43617] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 19:15:05,797 ERROR [M:0;jenkins-hbase4:43617] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 19:15:05,798 INFO [M:0;jenkins-hbase4:43617] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 19:15:05,798 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 19:15:05,798 DEBUG [M:0;jenkins-hbase4:43617] zookeeper.ZKUtil(398): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 19:15:05,798 WARN [M:0;jenkins-hbase4:43617] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 19:15:05,798 INFO [M:0;jenkins-hbase4:43617] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 19:15:05,799 INFO [M:0;jenkins-hbase4:43617] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 19:15:05,799 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 19:15:05,799 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:05,799 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:05,799 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 19:15:05,799 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 19:15:05,799 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.48 KB heapSize=632.63 KB 2023-07-18 19:15:05,814 INFO [M:0;jenkins-hbase4:43617] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.48 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a2701efd0fbd432088cfa0b1b65248b6 2023-07-18 19:15:05,820 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a2701efd0fbd432088cfa0b1b65248b6 as hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a2701efd0fbd432088cfa0b1b65248b6 2023-07-18 19:15:05,825 INFO [M:0;jenkins-hbase4:43617] regionserver.HStore(1080): Added hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a2701efd0fbd432088cfa0b1b65248b6, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-18 19:15:05,826 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegion(2948): Finished flush of dataSize ~528.48 KB/541159, heapSize ~632.62 KB/647800, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=1176, compaction requested=false 2023-07-18 19:15:05,828 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:05,828 DEBUG [M:0;jenkins-hbase4:43617] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:05,832 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:05,832 INFO [M:0;jenkins-hbase4:43617] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 19:15:05,833 INFO [M:0;jenkins-hbase4:43617] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43617 2023-07-18 19:15:05,834 DEBUG [M:0;jenkins-hbase4:43617] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43617,1689707677179 already deleted, retry=false 2023-07-18 19:15:06,103 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,103 INFO [M:0;jenkins-hbase4:43617] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43617,1689707677179; zookeeper connection closed. 2023-07-18 19:15:06,103 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): master:43617-0x10179db857e0000, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,203 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,204 INFO [RS:3;jenkins-hbase4:44751] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44751,1689707683024; zookeeper connection closed. 
2023-07-18 19:15:06,204 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:44751-0x10179db857e000b, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,204 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@169edc14] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@169edc14 2023-07-18 19:15:06,304 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,304 INFO [RS:1;jenkins-hbase4:41417] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41417,1689707679207; zookeeper connection closed. 2023-07-18 19:15:06,304 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:41417-0x10179db857e0002, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,304 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d36b8f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d36b8f5 2023-07-18 19:15:06,404 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,404 INFO [RS:2;jenkins-hbase4:36387] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36387,1689707679286; zookeeper connection closed. 2023-07-18 19:15:06,404 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:36387-0x10179db857e0003, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,405 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@66b8e41d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@66b8e41d 2023-07-18 19:15:06,504 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,504 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): regionserver:39561-0x10179db857e0001, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:06,504 INFO [RS:0;jenkins-hbase4:39561] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39561,1689707679120; zookeeper connection closed. 
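With every region server's ZooKeeper connection closed, the entries that follow report the full teardown ("Shutdown of 1 master(s) and 4 regionserver(s) complete", "Minicluster is down") and then immediately start a fresh mini cluster with the same StartMiniClusterOption (one master, three region servers, three datanodes, one ZK server). A minimal sketch of that stop/start cycle from test code, assuming the HBase 2.x testing API; the class and variable names are illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterCycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up DFS, ZooKeeper, master and region servers
    try {
      // test body
    } finally {
      util.shutdownMiniCluster();    // produces the shutdown sequence logged above
    }
  }
}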
2023-07-18 19:15:06,505 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@562517f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@562517f3 2023-07-18 19:15:06,505 INFO [Listener at localhost/40787] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 19:15:06,505 WARN [Listener at localhost/40787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:06,537 INFO [Listener at localhost/40787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:06,642 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:06,642 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-302341202-172.31.14.131-1689707673371 (Datanode Uuid a52f75a6-c146-4051-8542-c731e3261369) service to localhost/127.0.0.1:44967 2023-07-18 19:15:06,644 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data5/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:06,645 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data6/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:06,646 WARN [Listener at localhost/40787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:06,651 INFO [Listener at localhost/40787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:06,755 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:06,755 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-302341202-172.31.14.131-1689707673371 (Datanode Uuid 4ed0c958-2f47-4e92-90ec-43c457654399) service to localhost/127.0.0.1:44967 2023-07-18 19:15:06,756 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data3/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:06,756 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data4/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep 
interrupted 2023-07-18 19:15:06,758 WARN [Listener at localhost/40787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:06,762 INFO [Listener at localhost/40787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:06,865 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:06,865 WARN [BP-302341202-172.31.14.131-1689707673371 heartbeating to localhost/127.0.0.1:44967] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-302341202-172.31.14.131-1689707673371 (Datanode Uuid b5c05d4a-6a19-4843-b69c-c57368f68df2) service to localhost/127.0.0.1:44967 2023-07-18 19:15:06,866 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data1/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:06,866 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/cluster_94f70152-535b-05f0-9a58-4769e1440a34/dfs/data/data2/current/BP-302341202-172.31.14.131-1689707673371] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:06,905 INFO [Listener at localhost/40787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:07,026 INFO [Listener at localhost/40787] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 19:15:07,076 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 19:15:07,076 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 19:15:07,076 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.log.dir so I do NOT create it in target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/334d6407-2c30-32aa-a5a9-70c6b33d86d5/hadoop.tmp.dir so I do NOT create it in target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f, deleteOnExit=true 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 19:15:07,077 INFO 
[Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/test.cache.data in system properties and HBase conf 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir in system properties and HBase conf 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 19:15:07,077 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 19:15:07,077 DEBUG [Listener at localhost/40787] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:15:07,078 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 19:15:07,079 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/nfs.dump.dir in system properties and HBase conf 2023-07-18 19:15:07,079 INFO [Listener at localhost/40787] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/java.io.tmpdir in system properties and HBase conf 2023-07-18 19:15:07,079 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:15:07,079 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 19:15:07,079 INFO [Listener at localhost/40787] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 19:15:07,085 WARN [Listener at localhost/40787] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:15:07,085 WARN [Listener at localhost/40787] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:15:07,122 DEBUG [Listener at localhost/40787-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10179db857e000a, quorum=127.0.0.1:62147, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 19:15:07,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10179db857e000a, quorum=127.0.0.1:62147, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 19:15:07,128 WARN [Listener at localhost/40787] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 19:15:07,180 WARN [Listener at localhost/40787] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:07,182 INFO [Listener at localhost/40787] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:07,189 INFO [Listener at localhost/40787] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/java.io.tmpdir/Jetty_localhost_35723_hdfs____.v13mf4/webapp 2023-07-18 19:15:07,291 INFO [Listener at localhost/40787] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35723 2023-07-18 19:15:07,296 WARN [Listener at localhost/40787] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:15:07,296 WARN [Listener at localhost/40787] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:15:07,341 WARN [Listener at localhost/37601] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 
19:15:07,355 WARN [Listener at localhost/37601] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:07,358 WARN [Listener at localhost/37601] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:07,359 INFO [Listener at localhost/37601] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:07,365 INFO [Listener at localhost/37601] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/java.io.tmpdir/Jetty_localhost_39707_datanode____.fpbxs1/webapp 2023-07-18 19:15:07,469 INFO [Listener at localhost/37601] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39707 2023-07-18 19:15:07,479 WARN [Listener at localhost/36453] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:07,510 WARN [Listener at localhost/36453] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:07,513 WARN [Listener at localhost/36453] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:07,514 INFO [Listener at localhost/36453] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:07,518 INFO [Listener at localhost/36453] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/java.io.tmpdir/Jetty_localhost_37743_datanode____.cunton/webapp 2023-07-18 19:15:07,569 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:07,570 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 19:15:07,570 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 19:15:07,665 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa71043e26652994: Processing first storage report for DS-67736212-aa37-485a-9aee-6fed781fe9e1 from datanode bedbe0fa-81c5-44f7-a306-704de14cbc79 2023-07-18 19:15:07,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa71043e26652994: from storage DS-67736212-aa37-485a-9aee-6fed781fe9e1 node DatanodeRegistration(127.0.0.1:43621, datanodeUuid=bedbe0fa-81c5-44f7-a306-704de14cbc79, infoPort=36683, infoSecurePort=0, ipcPort=36453, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:07,665 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfa71043e26652994: Processing 
first storage report for DS-8c8ef8d6-7b30-4f2a-8939-224c516f83b7 from datanode bedbe0fa-81c5-44f7-a306-704de14cbc79 2023-07-18 19:15:07,665 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfa71043e26652994: from storage DS-8c8ef8d6-7b30-4f2a-8939-224c516f83b7 node DatanodeRegistration(127.0.0.1:43621, datanodeUuid=bedbe0fa-81c5-44f7-a306-704de14cbc79, infoPort=36683, infoSecurePort=0, ipcPort=36453, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 19:15:07,683 INFO [Listener at localhost/36453] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37743 2023-07-18 19:15:07,692 WARN [Listener at localhost/37461] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:07,714 WARN [Listener at localhost/37461] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:07,717 WARN [Listener at localhost/37461] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:07,718 INFO [Listener at localhost/37461] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:07,730 INFO [Listener at localhost/37461] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/java.io.tmpdir/Jetty_localhost_45235_datanode____.sk2cva/webapp 2023-07-18 19:15:07,852 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xab084411d88fb632: Processing first storage report for DS-bd2a2a59-21ef-4725-b70d-16ed191b0706 from datanode 7a5fe404-5b8d-41f2-8c5a-a605fcecbe34 2023-07-18 19:15:07,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xab084411d88fb632: from storage DS-bd2a2a59-21ef-4725-b70d-16ed191b0706 node DatanodeRegistration(127.0.0.1:38887, datanodeUuid=7a5fe404-5b8d-41f2-8c5a-a605fcecbe34, infoPort=40191, infoSecurePort=0, ipcPort=37461, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:07,852 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xab084411d88fb632: Processing first storage report for DS-6cc9ff73-6d8e-4534-8204-fdc098a52bd8 from datanode 7a5fe404-5b8d-41f2-8c5a-a605fcecbe34 2023-07-18 19:15:07,852 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xab084411d88fb632: from storage DS-6cc9ff73-6d8e-4534-8204-fdc098a52bd8 node DatanodeRegistration(127.0.0.1:38887, datanodeUuid=7a5fe404-5b8d-41f2-8c5a-a605fcecbe34, infoPort=40191, infoSecurePort=0, ipcPort=37461, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:07,881 INFO [Listener at localhost/37461] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45235 2023-07-18 19:15:07,893 WARN [Listener at localhost/45101] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:08,019 
INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcaa1ff58bbb2a18a: Processing first storage report for DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99 from datanode 11ca71b2-40d8-4f05-ad30-fa1eada172d9 2023-07-18 19:15:08,019 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcaa1ff58bbb2a18a: from storage DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99 node DatanodeRegistration(127.0.0.1:39355, datanodeUuid=11ca71b2-40d8-4f05-ad30-fa1eada172d9, infoPort=45183, infoSecurePort=0, ipcPort=45101, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 19:15:08,020 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcaa1ff58bbb2a18a: Processing first storage report for DS-0f66555c-962c-4077-bcf5-03fbc60698b1 from datanode 11ca71b2-40d8-4f05-ad30-fa1eada172d9 2023-07-18 19:15:08,020 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcaa1ff58bbb2a18a: from storage DS-0f66555c-962c-4077-bcf5-03fbc60698b1 node DatanodeRegistration(127.0.0.1:39355, datanodeUuid=11ca71b2-40d8-4f05-ad30-fa1eada172d9, infoPort=45183, infoSecurePort=0, ipcPort=45101, storageInfo=lv=-57;cid=testClusterID;nsid=704434845;c=1689707707088), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:08,110 DEBUG [Listener at localhost/45101] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e 2023-07-18 19:15:08,119 INFO [Listener at localhost/45101] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/zookeeper_0, clientPort=59566, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 19:15:08,122 INFO [Listener at localhost/45101] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59566 2023-07-18 19:15:08,123 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,124 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,155 INFO [Listener at localhost/45101] util.FSUtils(471): Created version file at hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962 with version=8 2023-07-18 19:15:08,155 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set 
to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/hbase-staging 2023-07-18 19:15:08,156 DEBUG [Listener at localhost/45101] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 19:15:08,156 DEBUG [Listener at localhost/45101] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 19:15:08,157 DEBUG [Listener at localhost/45101] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 19:15:08,157 DEBUG [Listener at localhost/45101] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 19:15:08,157 INFO [Listener at localhost/45101] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:08,158 INFO [Listener at localhost/45101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:08,159 INFO [Listener at localhost/45101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42481 2023-07-18 19:15:08,160 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,161 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,162 INFO [Listener at localhost/45101] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42481 connecting to ZooKeeper ensemble=127.0.0.1:59566 2023-07-18 19:15:08,172 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:424810x0, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:08,174 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42481-0x10179dc01d40000 connected 2023-07-18 19:15:08,187 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:08,188 DEBUG [Listener at 
localhost/45101] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:08,188 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:08,196 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-18 19:15:08,196 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42481 2023-07-18 19:15:08,200 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42481 2023-07-18 19:15:08,201 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-18 19:15:08,201 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42481 2023-07-18 19:15:08,203 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:08,204 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:08,204 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:08,204 INFO [Listener at localhost/45101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 19:15:08,204 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:08,205 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:08,205 INFO [Listener at localhost/45101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
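At this point the restarted master has bound its NettyRpcServer to port 42481 and set its watchers against the new ZooKeeper ensemble on client port 59566. A client configured against that quorum could talk to the restarted cluster; a minimal, illustrative sketch using the standard HBase client API (the client port is taken from the MiniZooKeeperCluster entry above, everything else is assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Quorum and client port of the restarted MiniZooKeeperCluster, per the log above.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 59566);
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Confirm the rsgroup system table from the log is visible to the client.
      System.out.println(admin.tableExists(TableName.valueOf("hbase:rsgroup")));
    }
  }
}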
2023-07-18 19:15:08,205 INFO [Listener at localhost/45101] http.HttpServer(1146): Jetty bound to port 37539 2023-07-18 19:15:08,205 INFO [Listener at localhost/45101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:08,209 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,209 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@49e4a08f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:08,210 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,210 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6682e202{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:08,220 INFO [Listener at localhost/45101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:08,221 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:08,222 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:08,222 INFO [Listener at localhost/45101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:08,225 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,226 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5b426ad{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 19:15:08,228 INFO [Listener at localhost/45101] server.AbstractConnector(333): Started ServerConnector@1539694d{HTTP/1.1, (http/1.1)}{0.0.0.0:37539} 2023-07-18 19:15:08,228 INFO [Listener at localhost/45101] server.Server(415): Started @36931ms 2023-07-18 19:15:08,228 INFO [Listener at localhost/45101] master.HMaster(444): hbase.rootdir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962, hbase.cluster.distributed=false 2023-07-18 19:15:08,244 INFO [Listener at localhost/45101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:08,244 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,244 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,245 INFO [Listener at localhost/45101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
19:15:08,245 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,245 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:08,245 INFO [Listener at localhost/45101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:08,246 INFO [Listener at localhost/45101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34883 2023-07-18 19:15:08,246 INFO [Listener at localhost/45101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:08,247 DEBUG [Listener at localhost/45101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:08,248 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,249 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,250 INFO [Listener at localhost/45101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34883 connecting to ZooKeeper ensemble=127.0.0.1:59566 2023-07-18 19:15:08,259 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:348830x0, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:08,260 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:348830x0, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:08,261 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34883-0x10179dc01d40001 connected 2023-07-18 19:15:08,261 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:08,262 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:08,262 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34883 2023-07-18 19:15:08,262 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34883 2023-07-18 19:15:08,263 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34883 2023-07-18 19:15:08,264 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34883 2023-07-18 19:15:08,264 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34883 2023-07-18 19:15:08,267 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:08,267 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:08,267 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:08,268 INFO [Listener at localhost/45101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:08,268 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:08,268 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:08,268 INFO [Listener at localhost/45101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:15:08,269 INFO [Listener at localhost/45101] http.HttpServer(1146): Jetty bound to port 40749 2023-07-18 19:15:08,269 INFO [Listener at localhost/45101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:08,272 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,273 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@981894e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:08,273 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,273 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d4490e8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:08,281 INFO [Listener at localhost/45101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:08,282 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:08,282 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:08,282 INFO [Listener at localhost/45101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:08,283 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,284 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@24a6ace2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:08,286 INFO [Listener at localhost/45101] server.AbstractConnector(333): Started ServerConnector@770aaad2{HTTP/1.1, (http/1.1)}{0.0.0.0:40749} 2023-07-18 19:15:08,286 INFO [Listener at localhost/45101] server.Server(415): Started @36989ms 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:08,300 INFO [Listener at localhost/45101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:08,301 INFO [Listener at localhost/45101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39351 2023-07-18 19:15:08,301 INFO [Listener at localhost/45101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:08,303 DEBUG [Listener at localhost/45101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:08,304 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,305 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,306 INFO [Listener at localhost/45101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39351 connecting to ZooKeeper ensemble=127.0.0.1:59566 2023-07-18 19:15:08,314 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:393510x0, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:08,315 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:393510x0, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:08,317 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39351-0x10179dc01d40002 connected 2023-07-18 19:15:08,318 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:08,319 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:08,319 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39351 2023-07-18 19:15:08,319 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39351 2023-07-18 19:15:08,322 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39351 2023-07-18 19:15:08,323 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39351 2023-07-18 19:15:08,323 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39351 2023-07-18 19:15:08,326 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:08,326 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:08,326 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:08,327 INFO [Listener at localhost/45101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:08,327 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:08,327 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:08,327 INFO [Listener at localhost/45101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 19:15:08,328 INFO [Listener at localhost/45101] http.HttpServer(1146): Jetty bound to port 46287 2023-07-18 19:15:08,328 INFO [Listener at localhost/45101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:08,332 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,333 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@120830ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:08,333 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,334 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d317869{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:08,341 INFO [Listener at localhost/45101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:08,342 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:08,343 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:08,343 INFO [Listener at localhost/45101] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 19:15:08,344 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,345 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40db2b1e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:08,347 INFO [Listener at localhost/45101] server.AbstractConnector(333): Started ServerConnector@47efc0a5{HTTP/1.1, (http/1.1)}{0.0.0.0:46287} 2023-07-18 19:15:08,347 INFO [Listener at localhost/45101] server.Server(415): Started @37050ms 2023-07-18 19:15:08,360 INFO [Listener at localhost/45101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:08,361 INFO [Listener at localhost/45101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:08,362 INFO [Listener at localhost/45101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37007 2023-07-18 19:15:08,363 INFO [Listener at localhost/45101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:08,364 DEBUG [Listener at localhost/45101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:08,365 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,367 INFO [Listener at localhost/45101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,368 INFO [Listener at localhost/45101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37007 connecting to ZooKeeper ensemble=127.0.0.1:59566 2023-07-18 19:15:08,373 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:370070x0, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:08,375 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:370070x0, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:08,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37007-0x10179dc01d40003 connected 2023-07-18 19:15:08,376 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:08,377 DEBUG [Listener at localhost/45101] zookeeper.ZKUtil(164): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:08,378 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37007 2023-07-18 19:15:08,379 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37007 2023-07-18 19:15:08,379 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37007 2023-07-18 19:15:08,380 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37007 2023-07-18 19:15:08,380 DEBUG [Listener at localhost/45101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37007 2023-07-18 19:15:08,382 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:08,382 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:08,383 INFO [Listener at localhost/45101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:08,383 INFO [Listener at localhost/45101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:08,384 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:08,384 INFO [Listener at localhost/45101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:08,384 INFO [Listener at localhost/45101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:15:08,385 INFO [Listener at localhost/45101] http.HttpServer(1146): Jetty bound to port 36755 2023-07-18 19:15:08,385 INFO [Listener at localhost/45101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:08,391 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,391 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d8d5276{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:08,392 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,392 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3547860d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:08,397 INFO [Listener at localhost/45101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:08,398 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:08,398 INFO [Listener at localhost/45101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:08,398 INFO [Listener at localhost/45101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:08,399 INFO [Listener at localhost/45101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:08,400 INFO [Listener at localhost/45101] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@371a9a21{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:08,401 INFO [Listener at localhost/45101] server.AbstractConnector(333): Started ServerConnector@4411c3c2{HTTP/1.1, (http/1.1)}{0.0.0.0:36755} 2023-07-18 19:15:08,401 INFO [Listener at localhost/45101] server.Server(415): Started @37104ms 2023-07-18 19:15:08,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:08,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@249204b6{HTTP/1.1, (http/1.1)}{0.0.0.0:33713} 2023-07-18 19:15:08,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37125ms 2023-07-18 19:15:08,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,423 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:15:08,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,426 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:08,426 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:08,426 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:08,426 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,426 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:08,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:15:08,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42481,1689707708157 from backup master directory 2023-07-18 
19:15:08,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:15:08,432 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,432 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:08,432 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:15:08,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,479 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/hbase.id with ID: 4c97c89f-1b75-47da-ae64-76e422dc6e85 2023-07-18 19:15:08,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:08,502 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x58646af7 to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:08,526 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38764f19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:08,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:08,527 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 19:15:08,528 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:08,529 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store-tmp 2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 19:15:08,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:08,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 19:15:08,545 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:08,546 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/WALs/jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42481%2C1689707708157, suffix=, logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/WALs/jenkins-hbase4.apache.org,42481,1689707708157, archiveDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/oldWALs, maxLogs=10 2023-07-18 19:15:08,568 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK] 2023-07-18 19:15:08,569 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK] 2023-07-18 19:15:08,569 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK] 2023-07-18 19:15:08,574 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/WALs/jenkins-hbase4.apache.org,42481,1689707708157/jenkins-hbase4.apache.org%2C42481%2C1689707708157.1689707708550 2023-07-18 19:15:08,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK], DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK], DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK]] 2023-07-18 19:15:08,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:08,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:08,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,578 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,579 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 19:15:08,580 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 19:15:08,580 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:08,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:08,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:08,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11767265760, jitterRate=0.09591202437877655}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:08,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:08,594 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 19:15:08,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 19:15:08,596 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 19:15:08,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 19:15:08,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 19:15:08,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 19:15:08,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 19:15:08,598 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 19:15:08,599 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 19:15:08,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 19:15:08,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 19:15:08,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 19:15:08,609 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 19:15:08,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 19:15:08,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 19:15:08,614 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:08,614 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,614 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 19:15:08,614 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:08,614 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:08,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42481,1689707708157, sessionid=0x10179dc01d40000, setting cluster-up flag (Was=false) 2023-07-18 19:15:08,621 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 19:15:08,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,635 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:08,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 19:15:08,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:08,641 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.hbase-snapshot/.tmp 2023-07-18 19:15:08,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 19:15:08,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 19:15:08,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 19:15:08,648 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:15:08,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 19:15:08,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 19:15:08,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:08,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:15:08,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 19:15:08,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:15:08,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-18 19:15:08,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:08,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689707738677 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 19:15:08,677 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:08,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,678 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 19:15:08,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 19:15:08,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 19:15:08,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 19:15:08,680 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:08,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 19:15:08,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 19:15:08,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707708686,5,FailOnTimeoutGroup] 2023-07-18 19:15:08,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707708691,5,FailOnTimeoutGroup] 2023-07-18 19:15:08,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 19:15:08,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,709 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(951): ClusterId : 4c97c89f-1b75-47da-ae64-76e422dc6e85 2023-07-18 19:15:08,709 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(951): ClusterId : 4c97c89f-1b75-47da-ae64-76e422dc6e85 2023-07-18 19:15:08,710 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:08,709 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(951): ClusterId : 4c97c89f-1b75-47da-ae64-76e422dc6e85 2023-07-18 19:15:08,712 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:08,712 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:08,717 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:08,717 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:08,717 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962 2023-07-18 19:15:08,719 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:08,719 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:08,719 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:08,719 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:08,721 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:08,722 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:08,722 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(43): Procedure 
online-snapshot initializing 2023-07-18 19:15:08,723 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:08,724 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ReadOnlyZKClient(139): Connect 0x07ac6dbc to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:08,725 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ReadOnlyZKClient(139): Connect 0x57fcee06 to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:08,728 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:08,733 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ReadOnlyZKClient(139): Connect 0x6061f652 to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:08,760 DEBUG [RS:1;jenkins-hbase4:39351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ea6042e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:08,760 DEBUG [RS:1;jenkins-hbase4:39351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a999d22, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:08,763 DEBUG [RS:2;jenkins-hbase4:37007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d5406cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:08,764 DEBUG [RS:2;jenkins-hbase4:37007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@48569902, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:08,773 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39351 2023-07-18 19:15:08,773 INFO [RS:1;jenkins-hbase4:39351] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:08,773 INFO [RS:1;jenkins-hbase4:39351] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:08,773 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 19:15:08,775 DEBUG [RS:0;jenkins-hbase4:34883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@593b0a89, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:08,775 DEBUG [RS:0;jenkins-hbase4:34883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d048576, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:08,777 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42481,1689707708157 with isa=jenkins-hbase4.apache.org/172.31.14.131:39351, startcode=1689707708299 2023-07-18 19:15:08,777 DEBUG [RS:1;jenkins-hbase4:39351] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:08,779 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35431, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:08,782 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42481] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,782 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:15:08,785 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 19:15:08,791 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37007 2023-07-18 19:15:08,791 INFO [RS:2;jenkins-hbase4:37007] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:08,791 INFO [RS:2;jenkins-hbase4:37007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:08,791 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 19:15:08,791 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962 2023-07-18 19:15:08,791 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37601 2023-07-18 19:15:08,792 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37539 2023-07-18 19:15:08,792 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42481,1689707708157 with isa=jenkins-hbase4.apache.org/172.31.14.131:37007, startcode=1689707708360 2023-07-18 19:15:08,792 DEBUG [RS:2;jenkins-hbase4:37007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:08,792 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34883 2023-07-18 19:15:08,792 INFO [RS:0;jenkins-hbase4:34883] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:08,793 INFO [RS:0;jenkins-hbase4:34883] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:08,793 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:15:08,793 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:08,794 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ZKUtil(162): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,795 WARN [RS:1;jenkins-hbase4:39351] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 19:15:08,795 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42481,1689707708157 with isa=jenkins-hbase4.apache.org/172.31.14.131:34883, startcode=1689707708244 2023-07-18 19:15:08,795 INFO [RS:1;jenkins-hbase4:39351] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:08,795 DEBUG [RS:0;jenkins-hbase4:34883] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:08,795 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,795 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39351,1689707708299] 2023-07-18 19:15:08,796 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:08,797 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36169, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:08,797 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54095, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:08,797 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42481] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:15:08,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 19:15:08,797 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42481] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:15:08,797 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 19:15:08,797 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962 2023-07-18 19:15:08,797 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37601 2023-07-18 19:15:08,798 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37539 2023-07-18 19:15:08,798 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962 2023-07-18 19:15:08,798 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37601 2023-07-18 19:15:08,798 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37539 2023-07-18 19:15:08,800 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:15:08,803 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/info 2023-07-18 19:15:08,803 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:15:08,804 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:08,804 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:08,804 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:15:08,805 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ZKUtil(162): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,805 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34883,1689707708244] 2023-07-18 19:15:08,805 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37007,1689707708360] 2023-07-18 19:15:08,805 WARN [RS:0;jenkins-hbase4:34883] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:08,805 INFO [RS:0;jenkins-hbase4:34883] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:08,805 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,806 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ZKUtil(162): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,806 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ZKUtil(162): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,806 WARN [RS:2;jenkins-hbase4:37007] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:08,806 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:08,806 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ZKUtil(162): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,806 INFO [RS:2;jenkins-hbase4:37007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:08,806 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,806 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:15:08,806 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ZKUtil(162): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,809 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:08,809 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:15:08,810 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:08,811 INFO [RS:1;jenkins-hbase4:39351] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:08,812 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/table 2023-07-18 19:15:08,813 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:15:08,813 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ZKUtil(162): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,814 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ZKUtil(162): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,814 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:08,814 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ZKUtil(162): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,815 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:08,815 INFO [RS:2;jenkins-hbase4:37007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:08,816 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ZKUtil(162): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,816 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ZKUtil(162): regionserver:34883-0x10179dc01d40001, 
quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,816 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ZKUtil(162): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,817 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:08,817 INFO [RS:0;jenkins-hbase4:34883] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:08,819 INFO [RS:1;jenkins-hbase4:39351] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:08,819 INFO [RS:0;jenkins-hbase4:34883] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:08,822 INFO [RS:2;jenkins-hbase4:37007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:08,823 INFO [RS:0;jenkins-hbase4:34883] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:08,823 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,826 INFO [RS:1;jenkins-hbase4:39351] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:08,826 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:08,826 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,826 INFO [RS:2;jenkins-hbase4:37007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:08,826 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,827 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:08,828 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:08,829 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740 2023-07-18 19:15:08,830 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,830 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,830 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740 2023-07-18 19:15:08,831 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,831 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,831 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:08,832 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 
19:15:08,831 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:08,832 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:0;jenkins-hbase4:34883] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:2;jenkins-hbase4:37007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,832 DEBUG [RS:1;jenkins-hbase4:39351] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:08,835 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 19:15:08,836 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:15:08,838 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,839 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,839 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,839 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,845 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,845 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:08,845 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,845 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,846 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,846 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,846 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,846 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,846 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,846 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11613116640, jitterRate=0.08155576884746552}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:15:08,847 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:15:08,847 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:08,847 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:15:08,848 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:08,848 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 19:15:08,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 19:15:08,850 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 19:15:08,855 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 19:15:08,859 INFO [RS:1;jenkins-hbase4:39351] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:08,860 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39351,1689707708299-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,862 INFO [RS:0;jenkins-hbase4:34883] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:08,862 INFO [RS:2;jenkins-hbase4:37007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:08,862 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34883,1689707708244-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,862 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37007,1689707708360-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,873 INFO [RS:2;jenkins-hbase4:37007] regionserver.Replication(203): jenkins-hbase4.apache.org,37007,1689707708360 started 2023-07-18 19:15:08,873 INFO [RS:0;jenkins-hbase4:34883] regionserver.Replication(203): jenkins-hbase4.apache.org,34883,1689707708244 started 2023-07-18 19:15:08,873 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37007,1689707708360, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37007, sessionid=0x10179dc01d40003 2023-07-18 19:15:08,873 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34883,1689707708244, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34883, sessionid=0x10179dc01d40001 2023-07-18 19:15:08,873 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:08,875 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:08,875 DEBUG [RS:0;jenkins-hbase4:34883] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34883,1689707708244' 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:08,875 DEBUG [RS:2;jenkins-hbase4:37007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,876 DEBUG [RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37007,1689707708360' 2023-07-18 19:15:08,876 DEBUG [RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:08,876 DEBUG [RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:08,876 INFO [RS:1;jenkins-hbase4:39351] regionserver.Replication(203): jenkins-hbase4.apache.org,39351,1689707708299 started 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:08,876 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:08,877 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:08,877 DEBUG [RS:2;jenkins-hbase4:37007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:08,877 DEBUG [RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37007,1689707708360' 2023-07-18 19:15:08,877 DEBUG 
[RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:08,876 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39351,1689707708299, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39351, sessionid=0x10179dc01d40002 2023-07-18 19:15:08,877 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:08,876 DEBUG [RS:0;jenkins-hbase4:34883] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:08,877 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34883,1689707708244' 2023-07-18 19:15:08,877 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:08,877 DEBUG [RS:1;jenkins-hbase4:39351] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,877 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39351,1689707708299' 2023-07-18 19:15:08,877 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:08,877 DEBUG [RS:2;jenkins-hbase4:37007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:08,877 DEBUG [RS:0;jenkins-hbase4:34883] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:08,877 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:08,877 DEBUG [RS:2;jenkins-hbase4:37007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:08,878 INFO [RS:2;jenkins-hbase4:37007] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 19:15:08,878 DEBUG [RS:0;jenkins-hbase4:34883] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:08,878 INFO [RS:0;jenkins-hbase4:34883] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39351,1689707708299' 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:08,878 DEBUG [RS:1;jenkins-hbase4:39351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:08,879 DEBUG [RS:1;jenkins-hbase4:39351] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:08,879 INFO [RS:1;jenkins-hbase4:39351] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 19:15:08,880 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,880 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,880 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,880 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ZKUtil(398): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 19:15:08,880 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ZKUtil(398): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 19:15:08,880 INFO [RS:0;jenkins-hbase4:34883] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 19:15:08,880 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ZKUtil(398): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 19:15:08,880 INFO [RS:1;jenkins-hbase4:39351] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 19:15:08,881 INFO [RS:2;jenkins-hbase4:37007] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 19:15:08,881 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,881 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,881 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,881 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,881 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:08,881 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:08,985 INFO [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37007%2C1689707708360, suffix=, logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,37007,1689707708360, archiveDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs, maxLogs=32 2023-07-18 19:15:08,985 INFO [RS:1;jenkins-hbase4:39351] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39351%2C1689707708299, suffix=, logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,39351,1689707708299, archiveDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs, maxLogs=32 2023-07-18 19:15:08,985 INFO [RS:0;jenkins-hbase4:34883] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34883%2C1689707708244, suffix=, logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,34883,1689707708244, archiveDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs, maxLogs=32 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:09,005 DEBUG [jenkins-hbase4:42481] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:09,007 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37007,1689707708360, state=OPENING 2023-07-18 19:15:09,009 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 19:15:09,011 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK] 2023-07-18 19:15:09,015 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK] 2023-07-18 19:15:09,016 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:09,017 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,37007,1689707708360}] 2023-07-18 19:15:09,018 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK] 2023-07-18 19:15:09,019 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:15:09,021 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK] 2023-07-18 19:15:09,021 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK] 2023-07-18 19:15:09,021 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK] 2023-07-18 19:15:09,023 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK] 2023-07-18 19:15:09,024 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK] 2023-07-18 19:15:09,024 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK] 2023-07-18 19:15:09,025 INFO [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,37007,1689707708360/jenkins-hbase4.apache.org%2C37007%2C1689707708360.1689707708990 2023-07-18 19:15:09,030 INFO [RS:1;jenkins-hbase4:39351] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,39351,1689707708299/jenkins-hbase4.apache.org%2C39351%2C1689707708299.1689707708992 2023-07-18 19:15:09,034 DEBUG [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK], DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK], DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK]] 2023-07-18 19:15:09,034 DEBUG [RS:1;jenkins-hbase4:39351] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK], DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK]] 2023-07-18 19:15:09,034 INFO [RS:0;jenkins-hbase4:34883] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,34883,1689707708244/jenkins-hbase4.apache.org%2C34883%2C1689707708244.1689707708992 2023-07-18 19:15:09,034 DEBUG [RS:0;jenkins-hbase4:34883] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK], DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK], DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK]] 2023-07-18 19:15:09,183 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:09,183 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:09,185 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39204, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:09,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 19:15:09,189 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:09,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37007%2C1689707708360.meta, suffix=.meta, logDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,37007,1689707708360, archiveDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs, maxLogs=32 2023-07-18 19:15:09,208 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK] 2023-07-18 19:15:09,209 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK] 2023-07-18 19:15:09,208 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK] 2023-07-18 19:15:09,211 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,37007,1689707708360/jenkins-hbase4.apache.org%2C37007%2C1689707708360.meta.1689707709191.meta 2023-07-18 19:15:09,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39355,DS-50b7b60a-d20c-420f-8bd7-bd5767d4ec99,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43621,DS-67736212-aa37-485a-9aee-6fed781fe9e1,DISK], DatanodeInfoWithStorage[127.0.0.1:38887,DS-bd2a2a59-21ef-4725-b70d-16ed191b0706,DISK]] 2023-07-18 19:15:09,211 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 19:15:09,212 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 19:15:09,212 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 19:15:09,214 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:15:09,214 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/info 2023-07-18 19:15:09,214 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/info 2023-07-18 19:15:09,215 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:15:09,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 
19:15:09,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:15:09,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:09,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:09,217 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:15:09,217 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:09,217 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:15:09,218 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/table 2023-07-18 19:15:09,218 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/table 2023-07-18 19:15:09,218 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:15:09,219 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 19:15:09,220 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740 2023-07-18 19:15:09,221 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740 2023-07-18 19:15:09,222 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 19:15:09,224 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:15:09,225 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10137664800, jitterRate=-0.05585639178752899}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:15:09,225 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:15:09,227 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689707709183 2023-07-18 19:15:09,232 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 19:15:09,232 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 19:15:09,233 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37007,1689707708360, state=OPEN 2023-07-18 19:15:09,234 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:15:09,234 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:15:09,236 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 19:15:09,236 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37007,1689707708360 in 218 msec 2023-07-18 19:15:09,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 19:15:09,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 388 msec 2023-07-18 19:15:09,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 590 msec 2023-07-18 19:15:09,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report 
in: status=null, state=RUNNING, startTime=1689707709240, completionTime=-1 2023-07-18 19:15:09,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 19:15:09,240 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-18 19:15:09,247 DEBUG [hconnection-0x4f21aee3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:09,249 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:09,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 19:15:09,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689707769250 2023-07-18 19:15:09,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689707829251 2023-07-18 19:15:09,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 10 msec 2023-07-18 19:15:09,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42481,1689707708157-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42481,1689707708157-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42481,1689707708157-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42481, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 19:15:09,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:09,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 19:15:09,263 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 19:15:09,264 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:09,267 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:09,267 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 19:15:09,270 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:09,270 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:09,270 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:09,271 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,272 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a empty. 
2023-07-18 19:15:09,272 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,273 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,273 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 19:15:09,273 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0 empty. 2023-07-18 19:15:09,273 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,273 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 19:15:09,298 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:09,300 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4c4de3646a3c1c3c6ecde78e36f1aea0, NAME => 'hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp 2023-07-18 19:15:09,304 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:09,307 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2ff39883d86ed5a2f624c02abe06614a, NAME => 'hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp 2023-07-18 19:15:09,323 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,323 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] 
regionserver.HRegion(1604): Closing 4c4de3646a3c1c3c6ecde78e36f1aea0, disabling compactions & flushes 2023-07-18 19:15:09,323 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,323 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,323 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. after waiting 0 ms 2023-07-18 19:15:09,324 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,324 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,324 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 4c4de3646a3c1c3c6ecde78e36f1aea0: 2023-07-18 19:15:09,327 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:09,328 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707709328"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707709328"}]},"ts":"1689707709328"} 2023-07-18 19:15:09,329 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2ff39883d86ed5a2f624c02abe06614a, disabling compactions & flushes 2023-07-18 19:15:09,330 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. after waiting 0 ms 2023-07-18 19:15:09,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,330 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,330 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2ff39883d86ed5a2f624c02abe06614a: 2023-07-18 19:15:09,333 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 19:15:09,333 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:09,333 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:09,334 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709334"}]},"ts":"1689707709334"} 2023-07-18 19:15:09,335 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707709335"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707709335"}]},"ts":"1689707709335"} 2023-07-18 19:15:09,335 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 19:15:09,337 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:09,338 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:09,338 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709338"}]},"ts":"1689707709338"} 2023-07-18 19:15:09,340 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:09,340 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:09,340 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:09,341 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:09,341 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:09,341 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4c4de3646a3c1c3c6ecde78e36f1aea0, ASSIGN}] 2023-07-18 19:15:09,342 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4c4de3646a3c1c3c6ecde78e36f1aea0, ASSIGN 2023-07-18 19:15:09,343 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=4c4de3646a3c1c3c6ecde78e36f1aea0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37007,1689707708360; forceNewPlan=false, retain=false 2023-07-18 19:15:09,343 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 19:15:09,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are 
{/default-rack=0} 2023-07-18 19:15:09,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:09,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:09,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:09,346 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:09,347 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2ff39883d86ed5a2f624c02abe06614a, ASSIGN}] 2023-07-18 19:15:09,348 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2ff39883d86ed5a2f624c02abe06614a, ASSIGN 2023-07-18 19:15:09,349 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2ff39883d86ed5a2f624c02abe06614a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34883,1689707708244; forceNewPlan=false, retain=false 2023-07-18 19:15:09,349 INFO [jenkins-hbase4:42481] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-18 19:15:09,351 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4c4de3646a3c1c3c6ecde78e36f1aea0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:09,351 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2ff39883d86ed5a2f624c02abe06614a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:09,351 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707709351"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707709351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707709351"}]},"ts":"1689707709351"} 2023-07-18 19:15:09,351 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707709351"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707709351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707709351"}]},"ts":"1689707709351"} 2023-07-18 19:15:09,353 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 4c4de3646a3c1c3c6ecde78e36f1aea0, server=jenkins-hbase4.apache.org,37007,1689707708360}] 2023-07-18 19:15:09,354 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 2ff39883d86ed5a2f624c02abe06614a, server=jenkins-hbase4.apache.org,34883,1689707708244}] 2023-07-18 19:15:09,507 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:09,507 DEBUG 
[RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:09,509 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:09,510 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c4de3646a3c1c3c6ecde78e36f1aea0, NAME => 'hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. service=MultiRowMutationService 2023-07-18 19:15:09,511 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,513 INFO [StoreOpener-4c4de3646a3c1c3c6ecde78e36f1aea0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,516 DEBUG [StoreOpener-4c4de3646a3c1c3c6ecde78e36f1aea0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/m 2023-07-18 19:15:09,516 DEBUG [StoreOpener-4c4de3646a3c1c3c6ecde78e36f1aea0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/m 2023-07-18 19:15:09,516 INFO [StoreOpener-4c4de3646a3c1c3c6ecde78e36f1aea0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c4de3646a3c1c3c6ecde78e36f1aea0 columnFamilyName m 2023-07-18 19:15:09,516 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,517 INFO [StoreOpener-4c4de3646a3c1c3c6ecde78e36f1aea0-1] regionserver.HStore(310): Store=4c4de3646a3c1c3c6ecde78e36f1aea0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2ff39883d86ed5a2f624c02abe06614a, NAME => 'hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,519 INFO [StoreOpener-2ff39883d86ed5a2f624c02abe06614a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,520 DEBUG [StoreOpener-2ff39883d86ed5a2f624c02abe06614a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/info 2023-07-18 19:15:09,520 DEBUG [StoreOpener-2ff39883d86ed5a2f624c02abe06614a-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/info 2023-07-18 19:15:09,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:09,521 INFO [StoreOpener-2ff39883d86ed5a2f624c02abe06614a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2ff39883d86ed5a2f624c02abe06614a columnFamilyName info 2023-07-18 19:15:09,521 INFO [StoreOpener-2ff39883d86ed5a2f624c02abe06614a-1] regionserver.HStore(310): Store=2ff39883d86ed5a2f624c02abe06614a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:09,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:09,523 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c4de3646a3c1c3c6ecde78e36f1aea0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@263dd1ad, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:09,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c4de3646a3c1c3c6ecde78e36f1aea0: 2023-07-18 19:15:09,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0., pid=8, masterSystemTime=1689707709505 2023-07-18 19:15:09,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:09,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 
2023-07-18 19:15:09,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:09,530 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4c4de3646a3c1c3c6ecde78e36f1aea0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:09,530 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707709530"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707709530"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707709530"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707709530"}]},"ts":"1689707709530"} 2023-07-18 19:15:09,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:09,533 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2ff39883d86ed5a2f624c02abe06614a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10268403840, jitterRate=-0.04368036985397339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:09,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2ff39883d86ed5a2f624c02abe06614a: 2023-07-18 19:15:09,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 19:15:09,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 4c4de3646a3c1c3c6ecde78e36f1aea0, server=jenkins-hbase4.apache.org,37007,1689707708360 in 179 msec 2023-07-18 19:15:09,539 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-18 19:15:09,539 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=4c4de3646a3c1c3c6ecde78e36f1aea0, ASSIGN in 196 msec 2023-07-18 19:15:09,540 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a., pid=9, masterSystemTime=1689707709507 2023-07-18 19:15:09,540 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:09,542 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709540"}]},"ts":"1689707709540"} 2023-07-18 19:15:09,544 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 
2023-07-18 19:15:09,544 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:09,545 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 19:15:09,551 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:09,552 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=2ff39883d86ed5a2f624c02abe06614a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:09,552 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707709551"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707709551"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707709551"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707709551"}]},"ts":"1689707709551"} 2023-07-18 19:15:09,553 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 288 msec 2023-07-18 19:15:09,555 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 19:15:09,555 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 2ff39883d86ed5a2f624c02abe06614a, server=jenkins-hbase4.apache.org,34883,1689707708244 in 200 msec 2023-07-18 19:15:09,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-18 19:15:09,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=2ff39883d86ed5a2f624c02abe06614a, ASSIGN in 208 msec 2023-07-18 19:15:09,557 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:09,557 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709557"}]},"ts":"1689707709557"} 2023-07-18 19:15:09,559 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 19:15:09,561 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:09,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 19:15:09,562 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 302 msec 2023-07-18 19:15:09,563 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, 
quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:09,563 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:09,567 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:09,570 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:09,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 19:15:09,579 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 19:15:09,579 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 19:15:09,583 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:09,583 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:09,585 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:15:09,586 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:09,587 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42481,1689707708157] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 19:15:09,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 15 msec 2023-07-18 19:15:09,599 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 19:15:09,606 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:09,609 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-18 19:15:09,614 DEBUG [Listener at 
localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 19:15:09,616 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 19:15:09,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.184sec 2023-07-18 19:15:09,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 19:15:09,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:09,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 19:15:09,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 19:15:09,621 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:09,622 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:09,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-18 19:15:09,624 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,624 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be empty. 2023-07-18 19:15:09,625 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,625 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 19:15:09,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 19:15:09,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 
2023-07-18 19:15:09,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:09,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 19:15:09,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 19:15:09,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42481,1689707708157-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 19:15:09,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42481,1689707708157-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 19:15:09,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 19:15:09,643 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:09,644 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => aa64e7d74e70db7acc3fcfc4f751f6be, NAME => 'hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp 2023-07-18 19:15:09,656 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,656 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing aa64e7d74e70db7acc3fcfc4f751f6be, disabling compactions & flushes 2023-07-18 19:15:09,656 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,657 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,657 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 
after waiting 0 ms 2023-07-18 19:15:09,657 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,657 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,657 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for aa64e7d74e70db7acc3fcfc4f751f6be: 2023-07-18 19:15:09,659 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:09,660 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689707709660"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707709660"}]},"ts":"1689707709660"} 2023-07-18 19:15:09,661 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:09,662 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:09,662 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709662"}]},"ts":"1689707709662"} 2023-07-18 19:15:09,663 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 19:15:09,667 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:09,667 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:09,667 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:09,667 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:09,667 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:09,667 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=aa64e7d74e70db7acc3fcfc4f751f6be, ASSIGN}] 2023-07-18 19:15:09,668 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=aa64e7d74e70db7acc3fcfc4f751f6be, ASSIGN 2023-07-18 19:15:09,669 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=aa64e7d74e70db7acc3fcfc4f751f6be, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39351,1689707708299; forceNewPlan=false, retain=false 2023-07-18 19:15:09,710 DEBUG [Listener at localhost/45101] zookeeper.ReadOnlyZKClient(139): Connect 0x0900ac95 to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 
19:15:09,715 DEBUG [Listener at localhost/45101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fa78d45, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:09,717 DEBUG [hconnection-0x50772d21-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:09,719 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39230, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:09,720 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:09,721 INFO [Listener at localhost/45101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:09,723 DEBUG [Listener at localhost/45101] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 19:15:09,725 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 19:15:09,729 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 19:15:09,729 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:09,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 19:15:09,730 DEBUG [Listener at localhost/45101] zookeeper.ReadOnlyZKClient(139): Connect 0x1cd96770 to 127.0.0.1:59566 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:09,735 DEBUG [Listener at localhost/45101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@752e4fef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:09,735 INFO [Listener at localhost/45101] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59566 2023-07-18 19:15:09,737 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:09,739 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10179dc01d4000a connected 2023-07-18 19:15:09,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 19:15:09,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=14, 
state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 19:15:09,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 19:15:09,755 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:09,758 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-18 19:15:09,819 INFO [jenkins-hbase4:42481] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 19:15:09,821 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa64e7d74e70db7acc3fcfc4f751f6be, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:09,821 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689707709821"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707709821"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707709821"}]},"ts":"1689707709821"} 2023-07-18 19:15:09,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure aa64e7d74e70db7acc3fcfc4f751f6be, server=jenkins-hbase4.apache.org,39351,1689707708299}] 2023-07-18 19:15:09,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 19:15:09,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:09,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 19:15:09,858 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:09,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-18 19:15:09,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:09,860 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:09,860 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:15:09,862 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure 
table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:09,863 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:09,864 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208 empty. 2023-07-18 19:15:09,864 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:09,864 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 19:15:09,876 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:09,877 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => f80d33599cb9560811ad11bc91bad208, NAME => 'np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp 2023-07-18 19:15:09,885 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,885 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing f80d33599cb9560811ad11bc91bad208, disabling compactions & flushes 2023-07-18 19:15:09,886 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:09,886 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:09,886 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. after waiting 0 ms 2023-07-18 19:15:09,886 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:09,886 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 
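For orientation, the CreateNamespaceProcedure (pid=14) and CreateTableProcedure (pid=16) above are driven by ordinary client Admin calls. A minimal sketch of those calls follows, using the names and quota settings taken from the log ('np1' with maxregions=5 and maxtables=2, table 'table1' with family 'fam1'); the Admin handle is assumed to come from an already open Connection.

    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class Np1SetupSketch {
      // Sketch: assumes 'admin' was obtained from an open Connection.
      static void createQuotaLimitedNamespace(Admin admin) throws Exception {
        // Matches HMaster$15(3014): {NAME => 'np1', hbase.namespace.quota.maxregions => '5',
        // hbase.namespace.quota.maxtables => '2'}
        admin.createNamespace(NamespaceDescriptor.create("np1")
            .addConfiguration("hbase.namespace.quota.maxregions", "5")
            .addConfiguration("hbase.namespace.quota.maxtables", "2")
            .build());
        // Matches HMaster$4(2112): create 'np1:table1' with the single family 'fam1'.
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build());
      }
    }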
2023-07-18 19:15:09,886 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for f80d33599cb9560811ad11bc91bad208: 2023-07-18 19:15:09,888 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:09,889 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707709889"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707709889"}]},"ts":"1689707709889"} 2023-07-18 19:15:09,890 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:09,891 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:09,891 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707709891"}]},"ts":"1689707709891"} 2023-07-18 19:15:09,892 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 19:15:09,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:09,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:09,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:09,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:09,895 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:09,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, ASSIGN}] 2023-07-18 19:15:09,896 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, ASSIGN 2023-07-18 19:15:09,897 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37007,1689707708360; forceNewPlan=false, retain=false 2023-07-18 19:15:09,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:09,976 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:09,976 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:09,977 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39206, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:09,982 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aa64e7d74e70db7acc3fcfc4f751f6be, NAME => 'hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:09,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:09,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,982 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,983 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,985 DEBUG [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/q 2023-07-18 19:15:09,985 DEBUG [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/q 2023-07-18 19:15:09,985 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa64e7d74e70db7acc3fcfc4f751f6be columnFamilyName q 2023-07-18 19:15:09,986 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] regionserver.HStore(310): Store=aa64e7d74e70db7acc3fcfc4f751f6be/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:09,986 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,987 DEBUG [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/u 2023-07-18 19:15:09,987 DEBUG [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/u 2023-07-18 19:15:09,987 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aa64e7d74e70db7acc3fcfc4f751f6be columnFamilyName u 2023-07-18 19:15:09,988 INFO [StoreOpener-aa64e7d74e70db7acc3fcfc4f751f6be-1] regionserver.HStore(310): Store=aa64e7d74e70db7acc3fcfc4f751f6be/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:09,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,990 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
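The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set on hbase:quota, so the fallback of memstore flush size divided by the number of families (64.0 M) is used. If that bound ever needed pinning explicitly, it is a per-table descriptor value; the sketch below is illustrative only, and the 32 MB figure is an arbitrary example rather than anything the test sets.

    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class FlushLowerBoundSketch {
      // Sketch: pin the per-column-family flush lower bound on a table descriptor so
      // FlushLargeStoresPolicy does not fall back to flush-size / #families.
      static TableDescriptor withExplicitLowerBound(TableDescriptor existing) {
        return TableDescriptorBuilder.newBuilder(existing)
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(32L * 1024 * 1024)) // 32 MB, arbitrary example value
            .build();
      }
    }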
2023-07-18 19:15:09,991 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:09,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:09,995 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aa64e7d74e70db7acc3fcfc4f751f6be; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10881428800, jitterRate=0.013412028551101685}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 19:15:09,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aa64e7d74e70db7acc3fcfc4f751f6be: 2023-07-18 19:15:09,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be., pid=15, masterSystemTime=1689707709975 2023-07-18 19:15:09,999 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,999 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:09,999 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=aa64e7d74e70db7acc3fcfc4f751f6be, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:10,000 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689707709999"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707709999"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707709999"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707709999"}]},"ts":"1689707709999"} 2023-07-18 19:15:10,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-18 19:15:10,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure aa64e7d74e70db7acc3fcfc4f751f6be, server=jenkins-hbase4.apache.org,39351,1689707708299 in 178 msec 2023-07-18 19:15:10,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 19:15:10,003 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=aa64e7d74e70db7acc3fcfc4f751f6be, ASSIGN in 335 msec 2023-07-18 19:15:10,004 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:10,004 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707710004"}]},"ts":"1689707710004"} 2023-07-18 19:15:10,005 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 19:15:10,007 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:10,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 388 msec 2023-07-18 19:15:10,047 INFO [jenkins-hbase4:42481] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 19:15:10,048 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f80d33599cb9560811ad11bc91bad208, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:10,049 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707710048"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707710048"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707710048"}]},"ts":"1689707710048"} 2023-07-18 19:15:10,050 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure f80d33599cb9560811ad11bc91bad208, server=jenkins-hbase4.apache.org,37007,1689707708360}] 2023-07-18 19:15:10,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:10,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 
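The repeated MasterRpcServices(1230) "Checking to see if procedure is done" lines are the client polling the master for completion of its create-table procedure before the table future resolves (the "procId: 16 completed" line further down). A small sketch of making that wait explicit with the asynchronous Admin call is below; the 60-second timeout is an assumption for illustration.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class AwaitCreateSketch {
      // Sketch: submit the create and block until the master-side CreateTableProcedure
      // finishes, which is what drives the periodic "is procedure done" checks above.
      static void createAndAwait(Admin admin, TableDescriptor desc) throws Exception {
        Future<Void> pending = admin.createTableAsync(desc);
        pending.get(60, TimeUnit.SECONDS); // illustrative timeout
      }
    }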
2023-07-18 19:15:10,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f80d33599cb9560811ad11bc91bad208, NAME => 'np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:10,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:10,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,208 INFO [StoreOpener-f80d33599cb9560811ad11bc91bad208-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,209 DEBUG [StoreOpener-f80d33599cb9560811ad11bc91bad208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208/fam1 2023-07-18 19:15:10,209 DEBUG [StoreOpener-f80d33599cb9560811ad11bc91bad208-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208/fam1 2023-07-18 19:15:10,210 INFO [StoreOpener-f80d33599cb9560811ad11bc91bad208-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f80d33599cb9560811ad11bc91bad208 columnFamilyName fam1 2023-07-18 19:15:10,210 INFO [StoreOpener-f80d33599cb9560811ad11bc91bad208-1] regionserver.HStore(310): Store=f80d33599cb9560811ad11bc91bad208/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:10,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:10,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f80d33599cb9560811ad11bc91bad208; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11280740640, jitterRate=0.050600841641426086}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:10,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f80d33599cb9560811ad11bc91bad208: 2023-07-18 19:15:10,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208., pid=18, masterSystemTime=1689707710201 2023-07-18 19:15:10,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:10,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:10,221 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f80d33599cb9560811ad11bc91bad208, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:10,221 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707710220"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707710220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707710220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707710220"}]},"ts":"1689707710220"} 2023-07-18 19:15:10,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 19:15:10,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure f80d33599cb9560811ad11bc91bad208, server=jenkins-hbase4.apache.org,37007,1689707708360 in 172 msec 2023-07-18 19:15:10,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 19:15:10,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, ASSIGN in 327 msec 2023-07-18 19:15:10,226 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:10,226 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707710226"}]},"ts":"1689707710226"} 2023-07-18 19:15:10,227 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 19:15:10,230 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:10,232 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 376 msec 2023-07-18 19:15:10,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:10,462 INFO [Listener at localhost/45101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-18 19:15:10,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:10,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 19:15:10,467 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:10,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 19:15:10,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 19:15:10,487 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:10,488 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47444, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:10,491 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=26 msec 2023-07-18 19:15:10,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 19:15:10,571 INFO [Listener at localhost/45101] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-18 19:15:10,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:10,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:10,574 INFO [Listener at localhost/45101] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 19:15:10,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 19:15:10,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 19:15:10,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 19:15:10,577 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707710577"}]},"ts":"1689707710577"} 2023-07-18 19:15:10,579 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 19:15:10,580 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 19:15:10,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, UNASSIGN}] 2023-07-18 19:15:10,582 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, UNASSIGN 2023-07-18 19:15:10,583 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=f80d33599cb9560811ad11bc91bad208, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:10,583 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707710583"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707710583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707710583"}]},"ts":"1689707710583"} 2023-07-18 19:15:10,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure f80d33599cb9560811ad11bc91bad208, server=jenkins-hbase4.apache.org,37007,1689707708360}] 2023-07-18 19:15:10,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 19:15:10,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f80d33599cb9560811ad11bc91bad208, disabling compactions & flushes 2023-07-18 19:15:10,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:10,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:10,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. after waiting 0 ms 2023-07-18 19:15:10,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 2023-07-18 19:15:10,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/np1/table1/f80d33599cb9560811ad11bc91bad208/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:10,744 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208. 
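The rollback of pid=19 a few lines above shows the namespace quota doing its job: with 'np1' limited to 5 regions and 1 already used by np1:table1, a second table that would add 6 regions is rejected with QuotaExceededException. A hedged sketch of provoking that check from the client side follows; the five split keys are invented for illustration, and whether the caller sees the bare QuotaExceededException or a wrapped IOException depends on the client path used.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.quotas.QuotaExceededException;
    import org.apache.hadoop.hbase.util.Bytes;

    final class OverQuotaCreateSketch {
      // Sketch: 5 split keys => 6 regions, exceeding hbase.namespace.quota.maxregions=5 on 'np1'.
      static void tryCreateOverQuota(Admin admin) throws IOException {
        byte[][] splits = { Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
            Bytes.toBytes("4"), Bytes.toBytes("5") }; // illustrative keys only
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build(), splits);
        } catch (QuotaExceededException expected) {
          // Corresponds to the rolled-back pid=19: "not allowed to have 6 regions ... permitted is only 5"
        }
      }
    }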
2023-07-18 19:15:10,744 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f80d33599cb9560811ad11bc91bad208: 2023-07-18 19:15:10,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,748 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=f80d33599cb9560811ad11bc91bad208, regionState=CLOSED 2023-07-18 19:15:10,748 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707710748"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707710748"}]},"ts":"1689707710748"} 2023-07-18 19:15:10,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 19:15:10,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure f80d33599cb9560811ad11bc91bad208, server=jenkins-hbase4.apache.org,37007,1689707708360 in 165 msec 2023-07-18 19:15:10,752 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 19:15:10,752 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=f80d33599cb9560811ad11bc91bad208, UNASSIGN in 170 msec 2023-07-18 19:15:10,753 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707710753"}]},"ts":"1689707710753"} 2023-07-18 19:15:10,754 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 19:15:10,755 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 19:15:10,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 182 msec 2023-07-18 19:15:10,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 19:15:10,879 INFO [Listener at localhost/45101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 19:15:10,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 19:15:10,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,884 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 19:15:10,885 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,886 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:10,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:15:10,889 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 19:15:10,891 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208/fam1, FileablePath, hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208/recovered.edits] 2023-07-18 19:15:10,895 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208/recovered.edits/4.seqid to hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/archive/data/np1/table1/f80d33599cb9560811ad11bc91bad208/recovered.edits/4.seqid 2023-07-18 19:15:10,896 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/.tmp/data/np1/table1/f80d33599cb9560811ad11bc91bad208 2023-07-18 19:15:10,896 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 19:15:10,898 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,899 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 19:15:10,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-18 19:15:10,902 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,903 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 19:15:10,903 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707710903"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:10,904 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 19:15:10,904 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f80d33599cb9560811ad11bc91bad208, NAME => 'np1:table1,,1689707709854.f80d33599cb9560811ad11bc91bad208.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 19:15:10,904 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
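The DisableTableProcedure (pid=20) and DeleteTableProcedure (pid=23) above, together with the DeleteNamespaceProcedure (pid=24) that follows, correspond to the usual client-side cleanup order. A minimal sketch of that order is below; the Admin handle and the enabled-state check are assumptions for illustration.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class Np1TeardownSketch {
      // Sketch of the teardown mirrored by pid=20 (disable), pid=23 (delete table)
      // and pid=24 (delete namespace) in the surrounding records.
      static void dropNp1(Admin admin) throws Exception {
        TableName t = TableName.valueOf("np1", "table1");
        if (admin.isTableEnabled(t)) {
          admin.disableTable(t); // a table must be disabled before it can be deleted
        }
        admin.deleteTable(t);
        admin.deleteNamespace("np1"); // only succeeds once the namespace holds no tables
      }
    }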
2023-07-18 19:15:10,904 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707710904"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:10,905 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 19:15:10,911 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 19:15:10,912 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 31 msec 2023-07-18 19:15:10,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 19:15:10,992 INFO [Listener at localhost/45101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 19:15:10,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 19:15:11,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,008 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,011 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,014 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 19:15:11,015 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 19:15:11,015 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:11,016 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,020 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 19:15:11,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 24 msec 2023-07-18 19:15:11,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42481] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 19:15:11,116 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 19:15:11,117 INFO [Listener at 
localhost/45101] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0900ac95 to 127.0.0.1:59566 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] util.JVMClusterUtil(257): Found active master hash=1936016383, stopped=false 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 19:15:11,117 DEBUG [Listener at localhost/45101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 19:15:11,118 DEBUG [Listener at localhost/45101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 19:15:11,118 INFO [Listener at localhost/45101] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:11,119 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:11,119 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:11,119 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:11,119 INFO [Listener at localhost/45101] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 19:15:11,119 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:11,119 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:11,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:11,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:11,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:11,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/running 2023-07-18 19:15:11,123 DEBUG [Listener at localhost/45101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x58646af7 to 127.0.0.1:59566 2023-07-18 19:15:11,123 DEBUG [Listener at localhost/45101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34883,1689707708244' ***** 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39351,1689707708299' ***** 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37007,1689707708360' ***** 2023-07-18 19:15:11,123 INFO [Listener at localhost/45101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:11,123 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:11,123 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:11,123 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:11,137 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:11,140 INFO [RS:0;jenkins-hbase4:34883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@24a6ace2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:11,140 INFO [RS:2;jenkins-hbase4:37007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@371a9a21{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:11,140 INFO [RS:1;jenkins-hbase4:39351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40db2b1e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:11,141 INFO [RS:0;jenkins-hbase4:34883] server.AbstractConnector(383): Stopped ServerConnector@770aaad2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:11,141 INFO [RS:1;jenkins-hbase4:39351] server.AbstractConnector(383): Stopped ServerConnector@47efc0a5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:11,141 INFO [RS:2;jenkins-hbase4:37007] server.AbstractConnector(383): Stopped ServerConnector@4411c3c2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:11,141 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:11,141 INFO [RS:2;jenkins-hbase4:37007] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:11,141 INFO [RS:1;jenkins-hbase4:39351] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:11,141 INFO [RS:0;jenkins-hbase4:34883] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:11,145 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:11,145 INFO 
[RS:1;jenkins-hbase4:39351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d317869{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:11,146 INFO [RS:0;jenkins-hbase4:34883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d4490e8{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:11,146 INFO [RS:1;jenkins-hbase4:39351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@120830ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:11,143 INFO [RS:2;jenkins-hbase4:37007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3547860d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:11,146 INFO [RS:2;jenkins-hbase4:37007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d8d5276{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:11,146 INFO [RS:0;jenkins-hbase4:34883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@981894e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:11,147 INFO [RS:1;jenkins-hbase4:39351] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:11,147 INFO [RS:1;jenkins-hbase4:39351] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:11,147 INFO [RS:1;jenkins-hbase4:39351] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 19:15:11,147 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,147 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(3305): Received CLOSE for aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:11,150 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,151 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,154 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:11,154 DEBUG [RS:1;jenkins-hbase4:39351] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6061f652 to 127.0.0.1:59566 2023-07-18 19:15:11,155 DEBUG [RS:1;jenkins-hbase4:39351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,155 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 19:15:11,155 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1478): Online Regions={aa64e7d74e70db7acc3fcfc4f751f6be=hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be.} 2023-07-18 19:15:11,155 DEBUG [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1504): Waiting on aa64e7d74e70db7acc3fcfc4f751f6be 2023-07-18 19:15:11,156 INFO [RS:0;jenkins-hbase4:34883] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:11,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aa64e7d74e70db7acc3fcfc4f751f6be, disabling compactions & flushes 2023-07-18 19:15:11,156 INFO [RS:0;jenkins-hbase4:34883] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:11,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:11,156 INFO [RS:0;jenkins-hbase4:34883] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:11,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:11,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. after waiting 0 ms 2023-07-18 19:15:11,156 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(3305): Received CLOSE for 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:11,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 
2023-07-18 19:15:11,161 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:11,161 DEBUG [RS:0;jenkins-hbase4:34883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x57fcee06 to 127.0.0.1:59566 2023-07-18 19:15:11,161 DEBUG [RS:0;jenkins-hbase4:34883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,161 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 19:15:11,161 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1478): Online Regions={2ff39883d86ed5a2f624c02abe06614a=hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a.} 2023-07-18 19:15:11,161 DEBUG [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1504): Waiting on 2ff39883d86ed5a2f624c02abe06614a 2023-07-18 19:15:11,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2ff39883d86ed5a2f624c02abe06614a, disabling compactions & flushes 2023-07-18 19:15:11,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:11,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:11,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. after waiting 0 ms 2023-07-18 19:15:11,170 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:11,170 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2ff39883d86ed5a2f624c02abe06614a 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 19:15:11,170 INFO [RS:2;jenkins-hbase4:37007] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:11,170 INFO [RS:2;jenkins-hbase4:37007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:11,170 INFO [RS:2;jenkins-hbase4:37007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(3305): Received CLOSE for 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:11,171 DEBUG [RS:2;jenkins-hbase4:37007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07ac6dbc to 127.0.0.1:59566 2023-07-18 19:15:11,171 DEBUG [RS:2;jenkins-hbase4:37007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c4de3646a3c1c3c6ecde78e36f1aea0, disabling compactions & flushes 2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:11,171 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 19:15:11,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:11,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:11,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. after waiting 0 ms 2023-07-18 19:15:11,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:11,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4c4de3646a3c1c3c6ecde78e36f1aea0 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-18 19:15:11,174 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 19:15:11,175 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 4c4de3646a3c1c3c6ecde78e36f1aea0=hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0.} 2023-07-18 19:15:11,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:15:11,175 DEBUG [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1504): Waiting on 1588230740, 4c4de3646a3c1c3c6ecde78e36f1aea0 2023-07-18 19:15:11,175 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:15:11,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:15:11,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:15:11,175 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:15:11,175 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 19:15:11,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/quota/aa64e7d74e70db7acc3fcfc4f751f6be/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:11,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 
2023-07-18 19:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aa64e7d74e70db7acc3fcfc4f751f6be: 2023-07-18 19:15:11,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689707709619.aa64e7d74e70db7acc3fcfc4f751f6be. 2023-07-18 19:15:11,218 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/.tmp/m/579fe33db84349398f55185c16feb7ab 2023-07-18 19:15:11,224 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/info/b1484b9cd1e74ca8a1cb3be10fe11085 2023-07-18 19:15:11,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/.tmp/info/082f82a40f53401e82a0200a9edfe771 2023-07-18 19:15:11,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/.tmp/m/579fe33db84349398f55185c16feb7ab as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/m/579fe33db84349398f55185c16feb7ab 2023-07-18 19:15:11,233 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1484b9cd1e74ca8a1cb3be10fe11085 2023-07-18 19:15:11,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 082f82a40f53401e82a0200a9edfe771 2023-07-18 19:15:11,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/.tmp/info/082f82a40f53401e82a0200a9edfe771 as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/info/082f82a40f53401e82a0200a9edfe771 2023-07-18 19:15:11,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/m/579fe33db84349398f55185c16feb7ab, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 19:15:11,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 4c4de3646a3c1c3c6ecde78e36f1aea0 in 74ms, sequenceid=7, compaction requested=false 2023-07-18 19:15:11,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new 
MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 19:15:11,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 082f82a40f53401e82a0200a9edfe771 2023-07-18 19:15:11,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/info/082f82a40f53401e82a0200a9edfe771, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 19:15:11,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 2ff39883d86ed5a2f624c02abe06614a in 81ms, sequenceid=8, compaction requested=false 2023-07-18 19:15:11,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 19:15:11,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/rep_barrier/6e5ec6409d014890997870baac6be42a 2023-07-18 19:15:11,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/rsgroup/4c4de3646a3c1c3c6ecde78e36f1aea0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 19:15:11,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/namespace/2ff39883d86ed5a2f624c02abe06614a/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 19:15:11,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:11,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:11,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c4de3646a3c1c3c6ecde78e36f1aea0: 2023-07-18 19:15:11,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689707709264.4c4de3646a3c1c3c6ecde78e36f1aea0. 2023-07-18 19:15:11,280 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6e5ec6409d014890997870baac6be42a 2023-07-18 19:15:11,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 2023-07-18 19:15:11,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2ff39883d86ed5a2f624c02abe06614a: 2023-07-18 19:15:11,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689707709259.2ff39883d86ed5a2f624c02abe06614a. 
2023-07-18 19:15:11,304 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/table/dc958740d65c4c0a87962d208471da67 2023-07-18 19:15:11,310 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dc958740d65c4c0a87962d208471da67 2023-07-18 19:15:11,311 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/info/b1484b9cd1e74ca8a1cb3be10fe11085 as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/info/b1484b9cd1e74ca8a1cb3be10fe11085 2023-07-18 19:15:11,316 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b1484b9cd1e74ca8a1cb3be10fe11085 2023-07-18 19:15:11,316 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/info/b1484b9cd1e74ca8a1cb3be10fe11085, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 19:15:11,317 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/rep_barrier/6e5ec6409d014890997870baac6be42a as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/rep_barrier/6e5ec6409d014890997870baac6be42a 2023-07-18 19:15:11,323 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6e5ec6409d014890997870baac6be42a 2023-07-18 19:15:11,323 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/rep_barrier/6e5ec6409d014890997870baac6be42a, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 19:15:11,324 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/.tmp/table/dc958740d65c4c0a87962d208471da67 as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/table/dc958740d65c4c0a87962d208471da67 2023-07-18 19:15:11,330 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dc958740d65c4c0a87962d208471da67 2023-07-18 19:15:11,331 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/table/dc958740d65c4c0a87962d208471da67, entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 19:15:11,331 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 
KB/11312, currentSize=0 B/0 for 1588230740 in 156ms, sequenceid=31, compaction requested=false 2023-07-18 19:15:11,331 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 19:15:11,341 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 19:15:11,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:11,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:11,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:15:11,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:11,355 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39351,1689707708299; all regions closed. 2023-07-18 19:15:11,355 DEBUG [RS:1;jenkins-hbase4:39351] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 19:15:11,361 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34883,1689707708244; all regions closed. 2023-07-18 19:15:11,362 DEBUG [RS:0;jenkins-hbase4:34883] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 19:15:11,365 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/WALs/jenkins-hbase4.apache.org,39351,1689707708299/jenkins-hbase4.apache.org%2C39351%2C1689707708299.1689707708992 not finished, retry = 0 2023-07-18 19:15:11,371 DEBUG [RS:0;jenkins-hbase4:34883] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34883%2C1689707708244:(num 1689707708992) 2023-07-18 19:15:11,371 DEBUG [RS:0;jenkins-hbase4:34883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:11,371 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:11,371 INFO [RS:0;jenkins-hbase4:34883] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 19:15:11,372 INFO [RS:0;jenkins-hbase4:34883] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34883 2023-07-18 19:15:11,375 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37007,1689707708360; all regions closed. 2023-07-18 19:15:11,375 DEBUG [RS:2;jenkins-hbase4:37007] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 19:15:11,377 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:11,377 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:11,378 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:11,378 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:11,378 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:11,378 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34883,1689707708244 2023-07-18 19:15:11,378 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:11,384 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34883,1689707708244] 2023-07-18 19:15:11,384 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34883,1689707708244; numProcessing=1 2023-07-18 19:15:11,385 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34883,1689707708244 already deleted, retry=false 2023-07-18 19:15:11,385 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34883,1689707708244 expired; onlineServers=2 2023-07-18 19:15:11,386 DEBUG [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs 2023-07-18 19:15:11,386 INFO [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37007%2C1689707708360.meta:.meta(num 1689707709191) 2023-07-18 19:15:11,392 DEBUG [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to 
/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs 2023-07-18 19:15:11,392 INFO [RS:2;jenkins-hbase4:37007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37007%2C1689707708360:(num 1689707708990) 2023-07-18 19:15:11,392 DEBUG [RS:2;jenkins-hbase4:37007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,392 INFO [RS:2;jenkins-hbase4:37007] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,392 INFO [RS:2;jenkins-hbase4:37007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:11,392 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:11,393 INFO [RS:2;jenkins-hbase4:37007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37007 2023-07-18 19:15:11,468 DEBUG [RS:1;jenkins-hbase4:39351] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/oldWALs 2023-07-18 19:15:11,468 INFO [RS:1;jenkins-hbase4:39351] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39351%2C1689707708299:(num 1689707708992) 2023-07-18 19:15:11,468 DEBUG [RS:1;jenkins-hbase4:39351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,468 INFO [RS:1;jenkins-hbase4:39351] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:11,468 INFO [RS:1;jenkins-hbase4:39351] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:11,469 INFO [RS:1;jenkins-hbase4:39351] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:11,469 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:11,469 INFO [RS:1;jenkins-hbase4:39351] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:11,469 INFO [RS:1;jenkins-hbase4:39351] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:11,470 INFO [RS:1;jenkins-hbase4:39351] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39351 2023-07-18 19:15:11,484 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,484 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:34883-0x10179dc01d40001, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,484 INFO [RS:0;jenkins-hbase4:34883] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34883,1689707708244; zookeeper connection closed. 
2023-07-18 19:15:11,485 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49413d00] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49413d00 2023-07-18 19:15:11,486 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:11,486 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:11,486 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39351,1689707708299 2023-07-18 19:15:11,486 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:11,486 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37007,1689707708360 2023-07-18 19:15:11,487 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37007,1689707708360] 2023-07-18 19:15:11,487 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37007,1689707708360; numProcessing=2 2023-07-18 19:15:11,489 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37007,1689707708360 already deleted, retry=false 2023-07-18 19:15:11,489 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37007,1689707708360 expired; onlineServers=1 2023-07-18 19:15:11,489 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39351,1689707708299] 2023-07-18 19:15:11,489 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39351,1689707708299; numProcessing=3 2023-07-18 19:15:11,490 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39351,1689707708299 already deleted, retry=false 2023-07-18 19:15:11,490 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39351,1689707708299 expired; onlineServers=0 2023-07-18 19:15:11,490 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42481,1689707708157' ***** 2023-07-18 19:15:11,490 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 19:15:11,491 DEBUG [M:0;jenkins-hbase4:42481] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@775ad94b, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:11,491 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:11,493 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:11,493 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:11,493 INFO [M:0;jenkins-hbase4:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5b426ad{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 19:15:11,494 INFO [M:0;jenkins-hbase4:42481] server.AbstractConnector(383): Stopped ServerConnector@1539694d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:11,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:11,494 INFO [M:0;jenkins-hbase4:42481] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:11,494 INFO [M:0;jenkins-hbase4:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6682e202{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:11,494 INFO [M:0;jenkins-hbase4:42481] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@49e4a08f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:11,495 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42481,1689707708157 2023-07-18 19:15:11,495 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42481,1689707708157; all regions closed. 2023-07-18 19:15:11,495 DEBUG [M:0;jenkins-hbase4:42481] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:11,495 INFO [M:0;jenkins-hbase4:42481] master.HMaster(1491): Stopping master jetty server 2023-07-18 19:15:11,495 INFO [M:0;jenkins-hbase4:42481] server.AbstractConnector(383): Stopped ServerConnector@249204b6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:11,496 DEBUG [M:0;jenkins-hbase4:42481] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 19:15:11,496 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-18 19:15:11,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707708691] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707708691,5,FailOnTimeoutGroup] 2023-07-18 19:15:11,496 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707708686] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707708686,5,FailOnTimeoutGroup] 2023-07-18 19:15:11,496 DEBUG [M:0;jenkins-hbase4:42481] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 19:15:11,497 INFO [M:0;jenkins-hbase4:42481] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 19:15:11,497 INFO [M:0;jenkins-hbase4:42481] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 19:15:11,498 INFO [M:0;jenkins-hbase4:42481] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:11,498 DEBUG [M:0;jenkins-hbase4:42481] master.HMaster(1512): Stopping service threads 2023-07-18 19:15:11,498 INFO [M:0;jenkins-hbase4:42481] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 19:15:11,498 ERROR [M:0;jenkins-hbase4:42481] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 19:15:11,498 INFO [M:0;jenkins-hbase4:42481] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 19:15:11,499 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 19:15:11,499 DEBUG [M:0;jenkins-hbase4:42481] zookeeper.ZKUtil(398): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 19:15:11,499 WARN [M:0;jenkins-hbase4:42481] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 19:15:11,499 INFO [M:0;jenkins-hbase4:42481] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 19:15:11,499 INFO [M:0;jenkins-hbase4:42481] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 19:15:11,500 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 19:15:11,500 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:11,500 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:11,500 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 19:15:11,500 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 19:15:11,500 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.11 KB 2023-07-18 19:15:11,515 INFO [M:0;jenkins-hbase4:42481] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f84fedc290024042821c00ef1cd885fd 2023-07-18 19:15:11,522 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f84fedc290024042821c00ef1cd885fd as hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f84fedc290024042821c00ef1cd885fd 2023-07-18 19:15:11,529 INFO [M:0;jenkins-hbase4:42481] regionserver.HStore(1080): Added hdfs://localhost:37601/user/jenkins/test-data/5279069b-facc-ca1e-402b-49e4ebc73962/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f84fedc290024042821c00ef1cd885fd, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 19:15:11,530 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95177, heapSize ~109.09 KB/111712, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=194, compaction requested=false 2023-07-18 19:15:11,532 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:11,532 DEBUG [M:0;jenkins-hbase4:42481] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:11,540 INFO [M:0;jenkins-hbase4:42481] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 19:15:11,540 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:11,541 INFO [M:0;jenkins-hbase4:42481] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42481 2023-07-18 19:15:11,543 DEBUG [M:0;jenkins-hbase4:42481] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42481,1689707708157 already deleted, retry=false 2023-07-18 19:15:11,723 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,723 INFO [M:0;jenkins-hbase4:42481] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42481,1689707708157; zookeeper connection closed. 
2023-07-18 19:15:11,723 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): master:42481-0x10179dc01d40000, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,823 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,823 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:39351-0x10179dc01d40002, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,823 INFO [RS:1;jenkins-hbase4:39351] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39351,1689707708299; zookeeper connection closed. 2023-07-18 19:15:11,824 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@67252490] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@67252490 2023-07-18 19:15:11,923 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,923 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): regionserver:37007-0x10179dc01d40003, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 19:15:11,923 INFO [RS:2;jenkins-hbase4:37007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37007,1689707708360; zookeeper connection closed. 2023-07-18 19:15:11,924 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d6de756] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d6de756 2023-07-18 19:15:11,924 INFO [Listener at localhost/45101] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 19:15:11,924 WARN [Listener at localhost/45101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:11,945 INFO [Listener at localhost/45101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:12,053 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:12,053 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-991946445-172.31.14.131-1689707707088 (Datanode Uuid 11ca71b2-40d8-4f05-ad30-fa1eada172d9) service to localhost/127.0.0.1:37601 2023-07-18 19:15:12,054 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data5/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:12,054 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data6/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:12,056 WARN [Listener at localhost/45101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:12,065 INFO [Listener at localhost/45101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:12,168 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:12,168 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-991946445-172.31.14.131-1689707707088 (Datanode Uuid 7a5fe404-5b8d-41f2-8c5a-a605fcecbe34) service to localhost/127.0.0.1:37601 2023-07-18 19:15:12,169 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data3/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:12,169 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data4/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:12,171 WARN [Listener at localhost/45101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 19:15:12,203 INFO [Listener at localhost/45101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:12,307 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 19:15:12,307 WARN [BP-991946445-172.31.14.131-1689707707088 heartbeating to localhost/127.0.0.1:37601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-991946445-172.31.14.131-1689707707088 (Datanode Uuid bedbe0fa-81c5-44f7-a306-704de14cbc79) service to localhost/127.0.0.1:37601 2023-07-18 19:15:12,307 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data1/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 19:15:12,308 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/cluster_5a2e9dd3-8626-37dc-b80b-2cd67e8d648f/dfs/data/data2/current/BP-991946445-172.31.14.131-1689707707088] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-18 19:15:12,318 INFO [Listener at localhost/45101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 19:15:12,438 INFO [Listener at localhost/45101] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 19:15:12,485 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.log.dir so I do NOT create it in target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/2bea4e76-08ab-af14-3523-3eafbb156a8e/hadoop.tmp.dir so I do NOT create it in target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0, deleteOnExit=true 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/test.cache.data in system properties and HBase conf 2023-07-18 19:15:12,486 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 19:15:12,487 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir in system properties and HBase conf 2023-07-18 19:15:12,487 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 19:15:12,487 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 19:15:12,487 INFO [Listener at localhost/45101] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 19:15:12,487 DEBUG [Listener at localhost/45101] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-18 19:15:12,487 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:15:12,488 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 19:15:12,488 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 19:15:12,488 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:15:12,488 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/nfs.dump.dir in system properties and HBase conf 2023-07-18 19:15:12,489 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/java.io.tmpdir in system properties and HBase conf 2023-07-18 19:15:12,490 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 19:15:12,490 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 19:15:12,490 INFO [Listener at localhost/45101] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 19:15:12,495 WARN [Listener at localhost/45101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:15:12,495 WARN [Listener at localhost/45101] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:15:12,535 DEBUG [Listener at localhost/45101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10179dc01d4000a, quorum=127.0.0.1:59566, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 19:15:12,535 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10179dc01d4000a, quorum=127.0.0.1:59566, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 19:15:12,550 WARN [Listener at localhost/45101] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:12,553 INFO [Listener at localhost/45101] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:12,560 INFO [Listener at localhost/45101] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/java.io.tmpdir/Jetty_localhost_39115_hdfs____s94fix/webapp 2023-07-18 19:15:12,670 INFO [Listener at localhost/45101] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39115 2023-07-18 19:15:12,675 WARN [Listener at localhost/45101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 19:15:12,675 WARN [Listener at localhost/45101] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 19:15:12,711 WARN [Listener at localhost/38571] common.MetricsLoggerTask(153): Metrics logging will not be 
async since the logger is not log4j 2023-07-18 19:15:12,723 WARN [Listener at localhost/38571] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 19:15:12,761 WARN [Listener at localhost/38571] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:12,764 WARN [Listener at localhost/38571] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:12,765 INFO [Listener at localhost/38571] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:12,775 INFO [Listener at localhost/38571] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/java.io.tmpdir/Jetty_localhost_46743_datanode____hz0tgb/webapp 2023-07-18 19:15:12,883 INFO [Listener at localhost/38571] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46743 2023-07-18 19:15:12,890 WARN [Listener at localhost/40771] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:12,911 WARN [Listener at localhost/40771] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:12,914 WARN [Listener at localhost/40771] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:12,915 INFO [Listener at localhost/40771] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:12,919 INFO [Listener at localhost/40771] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/java.io.tmpdir/Jetty_localhost_45673_datanode____.nlkcvc/webapp 2023-07-18 19:15:12,996 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb08b19121109c70b: Processing first storage report for DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062 from datanode 04ce664a-c5ee-43dc-9ad6-67256d4dad95 2023-07-18 19:15:12,996 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb08b19121109c70b: from storage DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062 node DatanodeRegistration(127.0.0.1:35167, datanodeUuid=04ce664a-c5ee-43dc-9ad6-67256d4dad95, infoPort=44817, infoSecurePort=0, ipcPort=40771, storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:12,996 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb08b19121109c70b: Processing first storage report for DS-44bafbd7-f534-47f9-badc-5557591142d1 from datanode 04ce664a-c5ee-43dc-9ad6-67256d4dad95 2023-07-18 19:15:12,996 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb08b19121109c70b: from storage DS-44bafbd7-f534-47f9-badc-5557591142d1 node DatanodeRegistration(127.0.0.1:35167, datanodeUuid=04ce664a-c5ee-43dc-9ad6-67256d4dad95, infoPort=44817, infoSecurePort=0, ipcPort=40771, 
storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:13,024 INFO [Listener at localhost/40771] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45673 2023-07-18 19:15:13,031 WARN [Listener at localhost/43919] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:13,049 WARN [Listener at localhost/43919] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 19:15:13,051 WARN [Listener at localhost/43919] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 19:15:13,052 INFO [Listener at localhost/43919] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 19:15:13,057 INFO [Listener at localhost/43919] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/java.io.tmpdir/Jetty_localhost_44253_datanode____.r4qatl/webapp 2023-07-18 19:15:13,120 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb15cae5f72bb1290: Processing first storage report for DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0 from datanode 5c030bd4-bd7e-4b72-8ba7-967bcedafe25 2023-07-18 19:15:13,120 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb15cae5f72bb1290: from storage DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0 node DatanodeRegistration(127.0.0.1:33529, datanodeUuid=5c030bd4-bd7e-4b72-8ba7-967bcedafe25, infoPort=44019, infoSecurePort=0, ipcPort=43919, storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:13,120 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb15cae5f72bb1290: Processing first storage report for DS-2966b1cb-83e0-4d93-8914-8d8bc2a54d40 from datanode 5c030bd4-bd7e-4b72-8ba7-967bcedafe25 2023-07-18 19:15:13,120 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb15cae5f72bb1290: from storage DS-2966b1cb-83e0-4d93-8914-8d8bc2a54d40 node DatanodeRegistration(127.0.0.1:33529, datanodeUuid=5c030bd4-bd7e-4b72-8ba7-967bcedafe25, infoPort=44019, infoSecurePort=0, ipcPort=43919, storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 19:15:13,157 INFO [Listener at localhost/43919] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44253 2023-07-18 19:15:13,171 WARN [Listener at localhost/39045] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 19:15:13,267 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4146dad6ecf66b6: Processing first storage report for DS-4c592f60-35b9-425f-b4c5-280fa2071c1d from datanode e1e890d4-3692-4053-92e3-d1d407d6aa08 2023-07-18 19:15:13,267 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4146dad6ecf66b6: from storage DS-4c592f60-35b9-425f-b4c5-280fa2071c1d node 
DatanodeRegistration(127.0.0.1:37693, datanodeUuid=e1e890d4-3692-4053-92e3-d1d407d6aa08, infoPort=43625, infoSecurePort=0, ipcPort=39045, storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:13,267 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc4146dad6ecf66b6: Processing first storage report for DS-931de0d9-56a2-4dd0-a0ac-0c08bd52835a from datanode e1e890d4-3692-4053-92e3-d1d407d6aa08 2023-07-18 19:15:13,267 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc4146dad6ecf66b6: from storage DS-931de0d9-56a2-4dd0-a0ac-0c08bd52835a node DatanodeRegistration(127.0.0.1:37693, datanodeUuid=e1e890d4-3692-4053-92e3-d1d407d6aa08, infoPort=43625, infoSecurePort=0, ipcPort=39045, storageInfo=lv=-57;cid=testClusterID;nsid=69498127;c=1689707712498), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 19:15:13,282 DEBUG [Listener at localhost/39045] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9 2023-07-18 19:15:13,285 INFO [Listener at localhost/39045] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/zookeeper_0, clientPort=55220, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 19:15:13,286 INFO [Listener at localhost/39045] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55220 2023-07-18 19:15:13,286 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,287 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,304 INFO [Listener at localhost/39045] util.FSUtils(471): Created version file at hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 with version=8 2023-07-18 19:15:13,305 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44967/user/jenkins/test-data/9d2b5bb1-3268-3c14-f168-a4f7ea8609d3/hbase-staging 2023-07-18 19:15:13,306 DEBUG [Listener at localhost/39045] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 19:15:13,306 DEBUG [Listener at localhost/39045] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 
2023-07-18 19:15:13,306 DEBUG [Listener at localhost/39045] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 19:15:13,306 DEBUG [Listener at localhost/39045] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 19:15:13,306 INFO [Listener at localhost/39045] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:13,307 INFO [Listener at localhost/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:13,308 INFO [Listener at localhost/39045] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43365 2023-07-18 19:15:13,308 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,309 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,310 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43365 connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:13,317 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:433650x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:13,317 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43365-0x10179dc16070000 connected 2023-07-18 19:15:13,333 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:13,334 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:13,334 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does 
not yet exist, /hbase/acl 2023-07-18 19:15:13,338 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43365 2023-07-18 19:15:13,339 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43365 2023-07-18 19:15:13,339 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43365 2023-07-18 19:15:13,341 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43365 2023-07-18 19:15:13,341 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43365 2023-07-18 19:15:13,343 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:13,343 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:13,343 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] http.HttpServer(1146): Jetty bound to port 43377 2023-07-18 19:15:13,344 INFO [Listener at localhost/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:13,346 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,347 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@212fee63{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:13,347 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,347 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40c50e37{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:13,355 INFO [Listener at localhost/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:13,356 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:13,357 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:13,357 INFO [Listener at localhost/39045] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 19:15:13,359 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,360 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7e676d97{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 19:15:13,361 INFO [Listener at localhost/39045] server.AbstractConnector(333): Started ServerConnector@f1a837d{HTTP/1.1, (http/1.1)}{0.0.0.0:43377} 2023-07-18 19:15:13,361 INFO [Listener at localhost/39045] server.Server(415): Started @42065ms 2023-07-18 19:15:13,362 INFO [Listener at localhost/39045] master.HMaster(444): hbase.rootdir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884, hbase.cluster.distributed=false 2023-07-18 19:15:13,381 INFO [Listener at localhost/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:13,381 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,381 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,381 INFO [Listener at localhost/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
19:15:13,382 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,382 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:13,382 INFO [Listener at localhost/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:13,384 INFO [Listener at localhost/39045] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42899 2023-07-18 19:15:13,384 INFO [Listener at localhost/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:13,386 DEBUG [Listener at localhost/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:13,387 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,388 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,390 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42899 connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:13,393 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:428990x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:13,394 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42899-0x10179dc16070001 connected 2023-07-18 19:15:13,394 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:13,395 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:13,395 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:13,397 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-18 19:15:13,397 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42899 2023-07-18 19:15:13,398 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42899 2023-07-18 19:15:13,398 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-18 19:15:13,399 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42899 2023-07-18 19:15:13,401 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:13,401 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:13,401 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:13,401 INFO [Listener at localhost/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:13,402 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:13,402 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:13,402 INFO [Listener at localhost/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:15:13,402 INFO [Listener at localhost/39045] http.HttpServer(1146): Jetty bound to port 41441 2023-07-18 19:15:13,402 INFO [Listener at localhost/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:13,404 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,404 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@47397f67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:13,405 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,405 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4185f02{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:13,409 INFO [Listener at localhost/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:13,410 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:13,410 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:13,410 INFO [Listener at localhost/39045] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:13,411 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,411 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@11bb793a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:13,412 INFO [Listener at localhost/39045] server.AbstractConnector(333): Started ServerConnector@174373f0{HTTP/1.1, (http/1.1)}{0.0.0.0:41441} 2023-07-18 19:15:13,412 INFO [Listener at localhost/39045] server.Server(415): Started @42116ms 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:13,423 INFO [Listener at localhost/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:13,425 INFO [Listener at localhost/39045] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46825 2023-07-18 19:15:13,426 INFO [Listener at localhost/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:13,427 DEBUG [Listener at localhost/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:13,427 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,428 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,429 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46825 connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:13,432 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:468250x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:13,433 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:468250x0, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:13,434 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46825-0x10179dc16070002 connected 2023-07-18 19:15:13,434 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:13,435 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:13,435 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46825 2023-07-18 19:15:13,436 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46825 2023-07-18 19:15:13,436 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46825 2023-07-18 19:15:13,436 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46825 2023-07-18 19:15:13,436 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46825 2023-07-18 19:15:13,438 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:13,438 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:13,438 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] http.HttpServer(1146): Jetty bound to port 42239 2023-07-18 19:15:13,439 INFO [Listener at localhost/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:13,441 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,441 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@472a49f2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:13,441 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,441 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a8ac131{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:13,445 INFO [Listener at localhost/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:13,446 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:13,446 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:13,446 INFO [Listener at localhost/39045] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:13,447 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,447 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@14d5be5b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:13,449 INFO [Listener at localhost/39045] server.AbstractConnector(333): Started ServerConnector@5aabd8da{HTTP/1.1, (http/1.1)}{0.0.0.0:42239} 2023-07-18 19:15:13,450 INFO [Listener at localhost/39045] server.Server(415): Started @42153ms 2023-07-18 19:15:13,460 INFO [Listener at localhost/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:13,461 INFO [Listener at localhost/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:13,462 INFO [Listener at localhost/39045] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38221 2023-07-18 19:15:13,462 INFO [Listener at localhost/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:13,463 DEBUG [Listener at localhost/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:13,463 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,464 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,465 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38221 connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:13,469 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:382210x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:13,470 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38221-0x10179dc16070003 connected 2023-07-18 19:15:13,470 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 19:15:13,471 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:13,471 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:13,473 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38221 2023-07-18 19:15:13,473 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38221 2023-07-18 19:15:13,474 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38221 2023-07-18 19:15:13,474 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38221 2023-07-18 19:15:13,474 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38221 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:13,476 INFO [Listener at localhost/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 19:15:13,477 INFO [Listener at localhost/39045] http.HttpServer(1146): Jetty bound to port 46109 2023-07-18 19:15:13,477 INFO [Listener at localhost/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:13,480 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,481 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4efaa00c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:13,481 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,481 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@cf5170{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:13,485 INFO [Listener at localhost/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:13,486 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:13,486 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:13,486 INFO [Listener at localhost/39045] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:13,487 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:13,487 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4fdc91ed{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:13,488 INFO [Listener at localhost/39045] server.AbstractConnector(333): Started ServerConnector@38def70b{HTTP/1.1, (http/1.1)}{0.0.0.0:46109} 2023-07-18 19:15:13,489 INFO [Listener at localhost/39045] server.Server(415): Started @42192ms 2023-07-18 19:15:13,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:13,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4d23f4a6{HTTP/1.1, (http/1.1)}{0.0.0.0:39707} 2023-07-18 19:15:13,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42201ms 2023-07-18 19:15:13,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,500 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:15:13,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,502 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:13,502 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:13,502 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,502 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:13,502 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 19:15:13,504 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:15:13,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43365,1689707713306 from backup master directory 2023-07-18 
19:15:13,505 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:15:13,506 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,506 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 19:15:13,506 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:13,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/hbase.id with ID: 0b529690-565c-468f-a912-a07c087b6e62 2023-07-18 19:15:13,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:13,537 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1e751761 to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:13,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37703fb2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:13,559 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:13,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 19:15:13,560 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:13,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store-tmp 2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 19:15:13,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 19:15:13,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
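The entries above show the active master creating its local 'master:store' region from a descriptor with a single 'proc' family (ROW bloom filter, one version, 64 KB blocks, no compression). As a hedged illustration only, and not code taken from this test or from the master itself (which builds the descriptor internally), an equivalent descriptor could be assembled with the stock HBase 2.x client API roughly like this:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Mirrors the attributes printed in the log above; illustrative only.
    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                   // VERSIONS => '1'
            .setInMemory(false)                  // IN_MEMORY => 'false'
            .setBlocksize(65536)                 // BLOCKSIZE => '65536'
            .build())
        .build();
    System.out.println(store);
  }
}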
2023-07-18 19:15:13,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:13,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/WALs/jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43365%2C1689707713306, suffix=, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/WALs/jenkins-hbase4.apache.org,43365,1689707713306, archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/oldWALs, maxLogs=10 2023-07-18 19:15:13,591 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:13,591 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:13,594 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:13,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/WALs/jenkins-hbase4.apache.org,43365,1689707713306/jenkins-hbase4.apache.org%2C43365%2C1689707713306.1689707713576 2023-07-18 19:15:13,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK], DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK], DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK]] 2023-07-18 19:15:13,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:13,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:13,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,597 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,599 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,600 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 19:15:13,601 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 19:15:13,601 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:13,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 19:15:13,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:13,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10987706720, jitterRate=0.02330993115901947}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:13,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 19:15:13,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 19:15:13,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 19:15:13,611 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 19:15:13,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 19:15:13,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 19:15:13,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 19:15:13,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 19:15:13,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 19:15:13,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 19:15:13,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 19:15:13,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 19:15:13,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 19:15:13,617 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 19:15:13,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 19:15:13,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 19:15:13,620 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:13,620 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:13,620 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 19:15:13,620 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:13,620 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43365,1689707713306, sessionid=0x10179dc16070000, setting cluster-up flag (Was=false) 2023-07-18 19:15:13,631 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 19:15:13,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,640 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 19:15:13,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:13,646 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.hbase-snapshot/.tmp 2023-07-18 19:15:13,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 19:15:13,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 19:15:13,647 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 19:15:13,648 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:15:13,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-18 19:15:13,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:13,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:15:13,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 19:15:13,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 19:15:13,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 19:15:13,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:13,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:13,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:13,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 19:15:13,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 19:15:13,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:13,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689707743664 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 19:15:13,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,665 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:13,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 19:15:13,665 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 19:15:13,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 19:15:13,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 19:15:13,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 19:15:13,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 19:15:13,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707713666,5,FailOnTimeoutGroup] 2023-07-18 19:15:13,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707713666,5,FailOnTimeoutGroup] 2023-07-18 19:15:13,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
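A small consistency check on the CompletedProcedureCleaner entry above: the ADDED line carries timeout=30000 and timestamp=1689707743664, which is exactly the log's current epoch time of 1689707713664 (19:15:13,664) plus the 30000 ms timeout, i.e. the cleaner is simply scheduled 30 seconds into the future.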
2023-07-18 19:15:13,667 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:13,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 19:15:13,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,678 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:13,678 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:13,679 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 2023-07-18 19:15:13,689 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:13,711 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(951): ClusterId : 0b529690-565c-468f-a912-a07c087b6e62 2023-07-18 19:15:13,736 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:13,737 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:15:13,739 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(951): ClusterId : 0b529690-565c-468f-a912-a07c087b6e62 2023-07-18 19:15:13,739 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(951): ClusterId : 0b529690-565c-468f-a912-a07c087b6e62 2023-07-18 19:15:13,740 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:13,741 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:13,741 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:13,741 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:13,741 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/info 2023-07-18 19:15:13,742 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:15:13,742 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:13,743 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:15:13,743 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:13,744 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:13,744 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:13,744 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:15:13,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:13,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:15:13,747 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/table 2023-07-18 19:15:13,747 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:13,748 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:13,748 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:13,748 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:15:13,748 DEBUG [RS:0;jenkins-hbase4:42899] zookeeper.ReadOnlyZKClient(139): Connect 0x665b7fd1 to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:13,751 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:13,753 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:13,753 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740 2023-07-18 19:15:13,754 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:13,755 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740 2023-07-18 19:15:13,755 DEBUG [RS:0;jenkins-hbase4:42899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@689c9d6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:13,756 DEBUG [RS:0;jenkins-hbase4:42899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ca314a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:13,756 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ReadOnlyZKClient(139): Connect 0x0254b24f to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:13,757 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ReadOnlyZKClient(139): Connect 0x23953b41 to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:13,758 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
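The FlushLargeStoresPolicy entry above derives its per-family lower bound from the region's flush size: with hbase.hregion.percolumnfamilyflush.size.lower.bound unset, it falls back to the 128 MB memstore flush size (134217728 bytes) divided by the three column families of hbase:meta (info, rep_barrier, table), i.e. 134217728 / 3 ≈ 44739242 bytes ≈ 42.7 MB. This matches the flushSizeLowerBound=44739242 reported when the region is opened just below.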
2023-07-18 19:15:13,760 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:15:13,765 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:13,767 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10963167840, jitterRate=0.021024569869041443}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:15:13,767 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:15:13,767 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:15:13,767 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:15:13,767 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:15:13,767 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:15:13,767 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:15:13,769 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42899 2023-07-18 19:15:13,769 INFO [RS:0;jenkins-hbase4:42899] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:13,769 INFO [RS:0;jenkins-hbase4:42899] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:13,769 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 19:15:13,769 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43365,1689707713306 with isa=jenkins-hbase4.apache.org/172.31.14.131:42899, startcode=1689707713380 2023-07-18 19:15:13,769 DEBUG [RS:0;jenkins-hbase4:42899] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:13,771 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 19:15:13,771 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 19:15:13,771 DEBUG [RS:1;jenkins-hbase4:46825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47563c61, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:13,771 DEBUG [RS:1;jenkins-hbase4:46825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29dd7095, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:13,771 DEBUG [RS:2;jenkins-hbase4:38221] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6697b157, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:13,772 DEBUG [RS:2;jenkins-hbase4:38221] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25b8896c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:13,772 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 19:15:13,772 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 19:15:13,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 19:15:13,772 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47261, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:13,774 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 19:15:13,775 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,775 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:15:13,776 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 19:15:13,776 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 2023-07-18 19:15:13,776 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38571 2023-07-18 19:15:13,776 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43377 2023-07-18 19:15:13,776 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 19:15:13,778 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:13,778 DEBUG [RS:0;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,778 WARN [RS:0;jenkins-hbase4:42899] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:13,778 INFO [RS:0;jenkins-hbase4:42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:13,778 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,782 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38221 2023-07-18 19:15:13,782 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46825 2023-07-18 19:15:13,782 INFO [RS:2;jenkins-hbase4:38221] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:13,782 INFO [RS:1;jenkins-hbase4:46825] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:13,782 INFO [RS:1;jenkins-hbase4:46825] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:13,782 INFO [RS:2;jenkins-hbase4:38221] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:13,782 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:15:13,782 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 19:15:13,784 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43365,1689707713306 with isa=jenkins-hbase4.apache.org/172.31.14.131:46825, startcode=1689707713423 2023-07-18 19:15:13,784 DEBUG [RS:1;jenkins-hbase4:46825] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:13,784 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43365,1689707713306 with isa=jenkins-hbase4.apache.org/172.31.14.131:38221, startcode=1689707713460 2023-07-18 19:15:13,784 DEBUG [RS:2;jenkins-hbase4:38221] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:13,784 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42899,1689707713380] 2023-07-18 19:15:13,790 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58901, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:13,790 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,791 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 19:15:13,791 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 19:15:13,792 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 2023-07-18 19:15:13,792 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38571 2023-07-18 19:15:13,792 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43377 2023-07-18 19:15:13,792 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52689, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:13,792 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,792 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:15:13,792 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 19:15:13,792 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 2023-07-18 19:15:13,792 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38571 2023-07-18 19:15:13,792 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43377 2023-07-18 19:15:13,796 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:13,796 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:13,796 DEBUG [RS:0;jenkins-hbase4:42899] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,797 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,797 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38221,1689707713460] 2023-07-18 19:15:13,797 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46825,1689707713423] 2023-07-18 19:15:13,797 WARN [RS:1;jenkins-hbase4:46825] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 19:15:13,797 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,797 INFO [RS:1;jenkins-hbase4:46825] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:13,797 WARN [RS:2;jenkins-hbase4:38221] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 19:15:13,797 INFO [RS:2;jenkins-hbase4:38221] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:13,798 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,798 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,799 DEBUG [RS:0;jenkins-hbase4:42899] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:13,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,799 INFO [RS:0;jenkins-hbase4:42899] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:13,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,804 INFO [RS:0;jenkins-hbase4:42899] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:13,808 INFO [RS:0;jenkins-hbase4:42899] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:13,808 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,811 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:13,812 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
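Both the master's local store and all three region servers instantiate AsyncFSWALProvider via WALFactory in the lines above. As a hedged sketch using stock HBase 2.x configuration keys (not read from this test's files), the provider choice is driven by configuration roughly as follows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider,
    // matching the "Instantiating WALProvider" lines above; it is also the
    // default in HBase 2.x, so the test does not need to set it explicitly.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println(conf.get("hbase.wal.provider"));
  }
}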
2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,813 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,813 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:13,813 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,814 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,814 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,814 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,814 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,814 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,814 DEBUG [RS:0;jenkins-hbase4:42899] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,814 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,814 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,815 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): 
Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,815 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,815 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,816 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:13,816 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:13,816 INFO [RS:1;jenkins-hbase4:46825] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:13,816 INFO [RS:2;jenkins-hbase4:38221] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:13,823 INFO [RS:1;jenkins-hbase4:46825] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:13,823 INFO [RS:2;jenkins-hbase4:38221] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:13,823 INFO [RS:1;jenkins-hbase4:46825] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:13,823 INFO [RS:2;jenkins-hbase4:38221] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:13,823 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,823 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,823 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:13,823 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:13,826 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,826 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:13,826 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,826 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,826 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,826 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,826 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,826 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:1;jenkins-hbase4:46825] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,827 DEBUG [RS:2;jenkins-hbase4:38221] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:13,834 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,834 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,834 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,834 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,834 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,834 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,839 INFO [RS:0;jenkins-hbase4:42899] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:13,839 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42899,1689707713380-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,845 INFO [RS:2;jenkins-hbase4:38221] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:13,845 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38221,1689707713460-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:13,845 INFO [RS:1;jenkins-hbase4:46825] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:13,846 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46825,1689707713423-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:13,850 INFO [RS:0;jenkins-hbase4:42899] regionserver.Replication(203): jenkins-hbase4.apache.org,42899,1689707713380 started 2023-07-18 19:15:13,850 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42899,1689707713380, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42899, sessionid=0x10179dc16070001 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42899,1689707713380' 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:13,851 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42899,1689707713380' 2023-07-18 19:15:13,852 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:13,852 DEBUG [RS:0;jenkins-hbase4:42899] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:13,852 DEBUG [RS:0;jenkins-hbase4:42899] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:13,852 INFO [RS:0;jenkins-hbase4:42899] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:15:13,852 INFO [RS:0;jenkins-hbase4:42899] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 19:15:13,856 INFO [RS:2;jenkins-hbase4:38221] regionserver.Replication(203): jenkins-hbase4.apache.org,38221,1689707713460 started 2023-07-18 19:15:13,856 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38221,1689707713460, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38221, sessionid=0x10179dc16070003 2023-07-18 19:15:13,856 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:13,856 DEBUG [RS:2;jenkins-hbase4:38221] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,856 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38221,1689707713460' 2023-07-18 19:15:13,856 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:13,856 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38221,1689707713460' 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:13,857 DEBUG [RS:2;jenkins-hbase4:38221] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:13,857 INFO [RS:2;jenkins-hbase4:38221] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:15:13,857 INFO [RS:2;jenkins-hbase4:38221] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 19:15:13,860 INFO [RS:1;jenkins-hbase4:46825] regionserver.Replication(203): jenkins-hbase4.apache.org,46825,1689707713423 started 2023-07-18 19:15:13,860 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46825,1689707713423, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46825, sessionid=0x10179dc16070002 2023-07-18 19:15:13,860 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:13,860 DEBUG [RS:1;jenkins-hbase4:46825] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,860 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46825,1689707713423' 2023-07-18 19:15:13,860 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:13,860 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46825,1689707713423' 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:13,861 DEBUG [RS:1;jenkins-hbase4:46825] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:13,861 INFO [RS:1;jenkins-hbase4:46825] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:15:13,862 INFO [RS:1;jenkins-hbase4:46825] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:13,927 DEBUG [jenkins-hbase4:43365] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:13,928 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46825,1689707713423, state=OPENING 2023-07-18 19:15:13,930 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 19:15:13,932 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:13,933 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46825,1689707713423}] 2023-07-18 19:15:13,933 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:15:13,954 WARN [ReadOnlyZKClient-127.0.0.1:55220@0x1e751761] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 19:15:13,954 INFO [RS:0;jenkins-hbase4:42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42899%2C1689707713380, suffix=, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,42899,1689707713380, archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs, maxLogs=32 2023-07-18 19:15:13,954 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:13,956 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:13,957 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46825] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:45278 deadline: 1689707773957, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:13,959 INFO [RS:2;jenkins-hbase4:38221] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38221%2C1689707713460, suffix=, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,38221,1689707713460, 
archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs, maxLogs=32 2023-07-18 19:15:13,963 INFO [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46825%2C1689707713423, suffix=, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423, archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs, maxLogs=32 2023-07-18 19:15:13,975 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:13,975 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:13,975 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:13,982 INFO [RS:0;jenkins-hbase4:42899] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,42899,1689707713380/jenkins-hbase4.apache.org%2C42899%2C1689707713380.1689707713955 2023-07-18 19:15:13,982 DEBUG [RS:0;jenkins-hbase4:42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK], DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK], DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK]] 2023-07-18 19:15:13,993 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:13,993 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:13,999 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:14,001 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:14,001 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:14,003 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:14,007 INFO [RS:2;jenkins-hbase4:38221] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,38221,1689707713460/jenkins-hbase4.apache.org%2C38221%2C1689707713460.1689707713960 2023-07-18 19:15:14,010 INFO [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423/jenkins-hbase4.apache.org%2C46825%2C1689707713423.1689707713964 2023-07-18 19:15:14,010 DEBUG [RS:2;jenkins-hbase4:38221] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK], DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK], DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK]] 2023-07-18 19:15:14,011 DEBUG [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK], DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK], DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK]] 2023-07-18 19:15:14,087 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,089 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:14,090 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45282, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:14,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 19:15:14,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:14,096 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46825%2C1689707713423.meta, suffix=.meta, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423, archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs, maxLogs=32 2023-07-18 19:15:14,116 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:14,118 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:14,118 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:14,120 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423/jenkins-hbase4.apache.org%2C46825%2C1689707713423.meta.1689707714096.meta 2023-07-18 19:15:14,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK], DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK], DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK]] 2023-07-18 19:15:14,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:14,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:15:14,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 19:15:14,120 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
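The AbstractFSWAL entries above report the effective WAL settings: blocksize=256 MB, rollsize=128 MB, maxLogs=32, written through the AsyncFSWALProvider. As a rough sketch only, these values map onto the usual HBase WAL configuration keys; the property names are assumed to be the stock ones for this 2.x line and the WalConfigSketch class is purely illustrative, not part of the test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static Configuration walConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                      // AsyncFSWALProvider, as instantiated above
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L << 20);  // blocksize=256 MB
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);   // roll at 50% of blocksize => rollsize=128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                  // maxLogs=32
        return conf;
      }
    }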
2023-07-18 19:15:14,120 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 19:15:14,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:14,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 19:15:14,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 19:15:14,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 19:15:14,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/info 2023-07-18 19:15:14,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/info 2023-07-18 19:15:14,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 19:15:14,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:14,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 19:15:14,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:14,127 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/rep_barrier 2023-07-18 19:15:14,127 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 19:15:14,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:14,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 19:15:14,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/table 2023-07-18 19:15:14,129 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/table 2023-07-18 19:15:14,129 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 19:15:14,130 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:14,130 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740 2023-07-18 19:15:14,131 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740 2023-07-18 19:15:14,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
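The CompactionConfiguration lines above (minCompactSize:128 MB, minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0) reflect per-store compaction tuning. A minimal sketch of the corresponding settings, assuming the standard HBase property names; CompactionConfigSketch is an illustrative name only.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static Configuration compactionConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L << 20);  // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio 5.000000
        return conf;
      }
    }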
2023-07-18 19:15:14,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 19:15:14,136 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9609036640, jitterRate=-0.1050887256860733}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 19:15:14,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 19:15:14,137 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689707714087 2023-07-18 19:15:14,142 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 19:15:14,142 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 19:15:14,143 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46825,1689707713423, state=OPEN 2023-07-18 19:15:14,144 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 19:15:14,144 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 19:15:14,145 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 19:15:14,146 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46825,1689707713423 in 211 msec 2023-07-18 19:15:14,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 19:15:14,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 374 msec 2023-07-18 19:15:14,151 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 499 msec 2023-07-18 19:15:14,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689707714151, completionTime=-1 2023-07-18 19:15:14,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 19:15:14,151 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 19:15:14,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 19:15:14,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689707774156 2023-07-18 19:15:14,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689707834156 2023-07-18 19:15:14,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43365,1689707713306-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43365,1689707713306-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43365,1689707713306-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43365, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 19:15:14,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:14,162 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 19:15:14,162 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 19:15:14,163 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:14,164 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:14,165 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/namespace/b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,166 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/namespace/b9272c44f2ba649af542994b09338576 empty. 2023-07-18 19:15:14,166 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/namespace/b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,166 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 19:15:14,179 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:14,180 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b9272c44f2ba649af542994b09338576, NAME => 'hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp 2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b9272c44f2ba649af542994b09338576, disabling compactions & flushes 2023-07-18 19:15:14,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 
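The create 'hbase:namespace' entry above spells out the table schema in shell notation. The same descriptor can be expressed with the HBase 2.x builder API; a minimal sketch follows, showing only the attributes that differ from defaults (the NamespaceDescriptorSketch class name is illustrative, not part of the test code).

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceDescriptorSketch {
      public static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .setInMemory(true)                  // IN_MEMORY => 'true'
                .setMaxVersions(10)                 // VERSIONS => '10'
                .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                .build())
            .build();
      }
    }

Handing the resulting TableDescriptor to Admin.createTable(...) would drive the same kind of CreateTableProcedure state machine (PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS) that the PEWorker entries below walk through.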
2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. after waiting 0 ms 2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,189 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,189 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b9272c44f2ba649af542994b09338576: 2023-07-18 19:15:14,192 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:14,193 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707714193"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707714193"}]},"ts":"1689707714193"} 2023-07-18 19:15:14,195 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:14,196 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:14,196 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707714196"}]},"ts":"1689707714196"} 2023-07-18 19:15:14,197 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 19:15:14,200 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:14,200 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:14,200 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:14,201 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:14,201 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:14,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9272c44f2ba649af542994b09338576, ASSIGN}] 2023-07-18 19:15:14,202 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b9272c44f2ba649af542994b09338576, ASSIGN 2023-07-18 19:15:14,203 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b9272c44f2ba649af542994b09338576, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38221,1689707713460; forceNewPlan=false, retain=false 2023-07-18 19:15:14,261 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:14,263 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 19:15:14,265 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:14,266 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:14,267 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,268 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8 empty. 
2023-07-18 19:15:14,268 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,268 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 19:15:14,282 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:14,284 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 77e6d5a7493d3a9d5ff26a5f498a28d8, NAME => 'hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp 2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 77e6d5a7493d3a9d5ff26a5f498a28d8, disabling compactions & flushes 2023-07-18 19:15:14,294 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. after waiting 0 ms 2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:14,294 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 
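The 'hbase:rsgroup' descriptor above additionally carries a MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy on top of the single 'm' family. A hedged sketch of the equivalent builder calls, assuming the standard 2.x TableDescriptorBuilder method names; RsGroupDescriptorSketch is an illustrative class name only.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupDescriptorSketch {
      public static TableDescriptor build() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
            // coprocessor$1 => MultiRowMutationEndpoint
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            // SPLIT_POLICY => DisabledRegionSplitPolicy
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)  // VERSIONS => '1'
                .build())
            .build();
      }
    }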
2023-07-18 19:15:14,294 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 77e6d5a7493d3a9d5ff26a5f498a28d8: 2023-07-18 19:15:14,296 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:14,297 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707714297"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707714297"}]},"ts":"1689707714297"} 2023-07-18 19:15:14,299 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 19:15:14,299 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:14,299 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707714299"}]},"ts":"1689707714299"} 2023-07-18 19:15:14,300 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 19:15:14,303 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:14,304 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:14,304 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:14,304 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:14,304 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:14,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=77e6d5a7493d3a9d5ff26a5f498a28d8, ASSIGN}] 2023-07-18 19:15:14,305 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=77e6d5a7493d3a9d5ff26a5f498a28d8, ASSIGN 2023-07-18 19:15:14,307 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=77e6d5a7493d3a9d5ff26a5f498a28d8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46825,1689707713423; forceNewPlan=false, retain=false 2023-07-18 19:15:14,307 INFO [jenkins-hbase4:43365] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 19:15:14,309 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9272c44f2ba649af542994b09338576, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,309 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707714309"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707714309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707714309"}]},"ts":"1689707714309"} 2023-07-18 19:15:14,310 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=77e6d5a7493d3a9d5ff26a5f498a28d8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,310 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707714310"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707714310"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707714310"}]},"ts":"1689707714310"} 2023-07-18 19:15:14,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure b9272c44f2ba649af542994b09338576, server=jenkins-hbase4.apache.org,38221,1689707713460}] 2023-07-18 19:15:14,311 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 77e6d5a7493d3a9d5ff26a5f498a28d8, server=jenkins-hbase4.apache.org,46825,1689707713423}] 2023-07-18 19:15:14,464 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,464 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:14,465 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48134, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:14,467 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77e6d5a7493d3a9d5ff26a5f498a28d8, NAME => 'hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. service=MultiRowMutationService 2023-07-18 19:15:14,468 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:14,468 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,468 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b9272c44f2ba649af542994b09338576, NAME => 'hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:14,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:14,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,469 INFO [StoreOpener-77e6d5a7493d3a9d5ff26a5f498a28d8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,470 INFO [StoreOpener-b9272c44f2ba649af542994b09338576-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,471 DEBUG [StoreOpener-77e6d5a7493d3a9d5ff26a5f498a28d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/m 2023-07-18 19:15:14,471 DEBUG [StoreOpener-77e6d5a7493d3a9d5ff26a5f498a28d8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/m 2023-07-18 19:15:14,471 DEBUG 
[StoreOpener-b9272c44f2ba649af542994b09338576-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/info 2023-07-18 19:15:14,471 DEBUG [StoreOpener-b9272c44f2ba649af542994b09338576-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/info 2023-07-18 19:15:14,471 INFO [StoreOpener-77e6d5a7493d3a9d5ff26a5f498a28d8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77e6d5a7493d3a9d5ff26a5f498a28d8 columnFamilyName m 2023-07-18 19:15:14,471 INFO [StoreOpener-b9272c44f2ba649af542994b09338576-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b9272c44f2ba649af542994b09338576 columnFamilyName info 2023-07-18 19:15:14,472 INFO [StoreOpener-b9272c44f2ba649af542994b09338576-1] regionserver.HStore(310): Store=b9272c44f2ba649af542994b09338576/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:14,472 INFO [StoreOpener-77e6d5a7493d3a9d5ff26a5f498a28d8-1] regionserver.HStore(310): Store=77e6d5a7493d3a9d5ff26a5f498a28d8/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:14,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576 
2023-07-18 19:15:14,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b9272c44f2ba649af542994b09338576 2023-07-18 19:15:14,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:14,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:14,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 77e6d5a7493d3a9d5ff26a5f498a28d8; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@6fa52dc1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:14,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:14,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 77e6d5a7493d3a9d5ff26a5f498a28d8: 2023-07-18 19:15:14,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b9272c44f2ba649af542994b09338576; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10336280800, jitterRate=-0.037358835339546204}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:14,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b9272c44f2ba649af542994b09338576: 2023-07-18 19:15:14,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8., pid=9, masterSystemTime=1689707714464 2023-07-18 19:15:14,539 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576., pid=8, masterSystemTime=1689707714464 2023-07-18 19:15:14,542 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:14,543 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 
2023-07-18 19:15:14,547 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=77e6d5a7493d3a9d5ff26a5f498a28d8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,547 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,547 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689707714547"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707714547"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707714547"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707714547"}]},"ts":"1689707714547"} 2023-07-18 19:15:14,548 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:14,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b9272c44f2ba649af542994b09338576, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,551 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689707714551"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707714551"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707714551"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707714551"}]},"ts":"1689707714551"} 2023-07-18 19:15:14,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 19:15:14,552 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 77e6d5a7493d3a9d5ff26a5f498a28d8, server=jenkins-hbase4.apache.org,46825,1689707713423 in 238 msec 2023-07-18 19:15:14,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 19:15:14,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=77e6d5a7493d3a9d5ff26a5f498a28d8, ASSIGN in 248 msec 2023-07-18 19:15:14,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-18 19:15:14,559 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure b9272c44f2ba649af542994b09338576, server=jenkins-hbase4.apache.org,38221,1689707713460 in 242 msec 2023-07-18 19:15:14,559 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:14,560 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707714559"}]},"ts":"1689707714559"} 2023-07-18 19:15:14,561 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 19:15:14,562 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 19:15:14,562 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b9272c44f2ba649af542994b09338576, ASSIGN in 358 msec 2023-07-18 19:15:14,563 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:14,563 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707714563"}]},"ts":"1689707714563"} 2023-07-18 19:15:14,571 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 19:15:14,571 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:14,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 311 msec 2023-07-18 19:15:14,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 19:15:14,575 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:14,576 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:14,577 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:14,578 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 414 msec 2023-07-18 19:15:14,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:14,582 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48146, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:14,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 19:15:14,594 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:14,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 
2023-07-18 19:15:14,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 19:15:14,613 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:14,615 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-18 19:15:14,621 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 19:15:14,623 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 19:15:14,623 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.116sec 2023-07-18 19:15:14,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 19:15:14,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 19:15:14,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 19:15:14,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43365,1689707713306-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 19:15:14,624 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43365,1689707713306-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 19:15:14,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 19:15:14,640 DEBUG [Listener at localhost/39045] zookeeper.ReadOnlyZKClient(139): Connect 0x4c13588f to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:14,646 DEBUG [Listener at localhost/39045] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a4391df, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:14,648 DEBUG [hconnection-0x213b3a83-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:14,650 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:14,651 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:14,651 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:14,666 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 19:15:14,666 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 19:15:14,672 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:14,672 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:14,673 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:15:14,674 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 19:15:14,755 DEBUG [Listener at localhost/39045] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 19:15:14,757 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 19:15:14,761 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 19:15:14,761 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:14,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 19:15:14,763 DEBUG [Listener at localhost/39045] zookeeper.ReadOnlyZKClient(139): Connect 0x7c6f260e to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:14,767 DEBUG [Listener at localhost/39045] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@478f422, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:14,768 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:14,771 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:14,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10179dc1607000a connected 2023-07-18 19:15:14,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:14,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-18 19:15:14,779 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 19:15:14,791 INFO [Listener at localhost/39045] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 19:15:14,791 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:14,792 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:14,792 INFO [Listener at localhost/39045] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 19:15:14,792 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 19:15:14,792 INFO [Listener at localhost/39045] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 19:15:14,792 INFO [Listener at localhost/39045] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 19:15:14,793 INFO [Listener at localhost/39045] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44507 2023-07-18 19:15:14,793 INFO [Listener at localhost/39045] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 19:15:14,796 DEBUG [Listener at localhost/39045] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 19:15:14,796 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:14,797 INFO [Listener at localhost/39045] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 19:15:14,798 INFO [Listener at localhost/39045] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44507 connecting to ZooKeeper ensemble=127.0.0.1:55220 2023-07-18 19:15:14,803 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:445070x0, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 19:15:14,807 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(162): regionserver:445070x0, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 19:15:14,807 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44507-0x10179dc1607000b connected 2023-07-18 19:15:14,808 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 19:15:14,808 DEBUG [Listener at localhost/39045] zookeeper.ZKUtil(164): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/acl 2023-07-18 19:15:14,809 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44507 2023-07-18 19:15:14,809 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44507 2023-07-18 19:15:14,810 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44507 2023-07-18 19:15:14,814 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44507 2023-07-18 19:15:14,814 DEBUG [Listener at localhost/39045] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44507 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 19:15:14,816 INFO [Listener at localhost/39045] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 19:15:14,817 INFO [Listener at localhost/39045] http.HttpServer(1146): Jetty bound to port 45011 2023-07-18 19:15:14,817 INFO [Listener at localhost/39045] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 19:15:14,819 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:14,819 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5acf0cd7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,AVAILABLE} 2023-07-18 19:15:14,819 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:14,819 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c632bc4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 19:15:14,825 INFO [Listener at localhost/39045] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 19:15:14,826 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 19:15:14,826 INFO [Listener at localhost/39045] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 19:15:14,826 INFO [Listener at localhost/39045] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 19:15:14,827 INFO [Listener at localhost/39045] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 19:15:14,827 INFO [Listener at localhost/39045] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@21e0cde3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:14,829 INFO [Listener at localhost/39045] server.AbstractConnector(333): Started ServerConnector@4bd213f4{HTTP/1.1, (http/1.1)}{0.0.0.0:45011} 2023-07-18 19:15:14,829 INFO [Listener at localhost/39045] server.Server(415): Started @43532ms 2023-07-18 19:15:14,832 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(951): ClusterId : 0b529690-565c-468f-a912-a07c087b6e62 2023-07-18 19:15:14,835 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 19:15:14,838 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 19:15:14,838 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 19:15:14,840 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 19:15:14,841 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ReadOnlyZKClient(139): Connect 0x43293427 to 127.0.0.1:55220 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 19:15:14,845 DEBUG [RS:3;jenkins-hbase4:44507] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4deec232, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 19:15:14,846 DEBUG [RS:3;jenkins-hbase4:44507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49d15f57, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 19:15:14,854 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44507 2023-07-18 19:15:14,854 INFO [RS:3;jenkins-hbase4:44507] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 19:15:14,854 INFO [RS:3;jenkins-hbase4:44507] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 19:15:14,854 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 19:15:14,855 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43365,1689707713306 with isa=jenkins-hbase4.apache.org/172.31.14.131:44507, startcode=1689707714791 2023-07-18 19:15:14,855 DEBUG [RS:3;jenkins-hbase4:44507] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 19:15:14,857 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60129, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 19:15:14,858 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43365] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,858 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 19:15:14,858 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884 2023-07-18 19:15:14,858 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:38571 2023-07-18 19:15:14,858 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43377 2023-07-18 19:15:14,864 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:14,864 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:14,864 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:14,864 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:14,864 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 19:15:14,865 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 19:15:14,865 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44507,1689707714791] 2023-07-18 19:15:14,865 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,865 WARN [RS:3;jenkins-hbase4:44507] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 19:15:14,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:14,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:14,865 INFO [RS:3;jenkins-hbase4:44507] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 19:15:14,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:14,865 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1948): logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,866 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 19:15:14,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,871 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:14,871 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:14,871 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:14,872 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ZKUtil(162): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,873 DEBUG [RS:3;jenkins-hbase4:44507] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 19:15:14,873 INFO [RS:3;jenkins-hbase4:44507] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 19:15:14,879 INFO [RS:3;jenkins-hbase4:44507] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 19:15:14,879 INFO [RS:3;jenkins-hbase4:44507] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 19:15:14,879 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,879 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 19:15:14,881 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:14,882 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,882 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,883 DEBUG [RS:3;jenkins-hbase4:44507] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 19:15:14,886 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,887 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,887 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 19:15:14,901 INFO [RS:3;jenkins-hbase4:44507] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 19:15:14,901 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44507,1689707714791-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 19:15:14,913 INFO [RS:3;jenkins-hbase4:44507] regionserver.Replication(203): jenkins-hbase4.apache.org,44507,1689707714791 started 2023-07-18 19:15:14,913 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44507,1689707714791, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44507, sessionid=0x10179dc1607000b 2023-07-18 19:15:14,913 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 19:15:14,913 DEBUG [RS:3;jenkins-hbase4:44507] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,913 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44507,1689707714791' 2023-07-18 19:15:14,913 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 19:15:14,914 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 19:15:14,914 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 19:15:14,914 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 19:15:14,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:14,914 DEBUG [RS:3;jenkins-hbase4:44507] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:14,914 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44507,1689707714791' 2023-07-18 19:15:14,915 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 19:15:14,915 DEBUG [RS:3;jenkins-hbase4:44507] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 19:15:14,915 DEBUG [RS:3;jenkins-hbase4:44507] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 19:15:14,916 INFO [RS:3;jenkins-hbase4:44507] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 19:15:14,916 INFO [RS:3;jenkins-hbase4:44507] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 19:15:14,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:14,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:14,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:14,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:14,923 DEBUG [hconnection-0x13a4782d-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 19:15:14,924 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 19:15:14,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:14,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:14,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:14,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:14,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:43328 deadline: 1689708914933, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
2023-07-18 19:15:14,934 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI
org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-18 19:15:14,935 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:14,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:14,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:14,936 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:14,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:14,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:14,997 INFO [Listener at localhost/39045] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=566 (was 527) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024729879-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:52544 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 39045 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1545259874-2301 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown 
Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@6a65237b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x0254b24f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1220743267-2333 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x13a4782d-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 2 on default port 43919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: M:0;jenkins-hbase4:43365 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884-prefix:jenkins-hbase4.apache.org,46825,1689707713423 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:37601 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:37601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39045 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x0254b24f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 39045 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 159805897@qtp-1982738612-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46743 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:37601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024729879-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1220743267-2338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(984214216) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Parameter Sending Thread #2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data3/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:55308 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 39045 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1220743267-2335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46825 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 40771 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2024729879-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2024729879-2270 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_583590140_17 at /127.0.0.1:55258 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 38571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1139020458-2373 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1366183372@qtp-294889135-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp735681169-2637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 38571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x23953b41 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:38571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data1/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_583590140_17 at /127.0.0.1:43508 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x147e08f4-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x665b7fd1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData-prefix:jenkins-hbase4.apache.org,43365,1689707713306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_583590140_17 at /127.0.0.1:43492 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45101-SendThread(127.0.0.1:59566) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data4/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2c26aa2e sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5cc409d7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:38571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: jenkins-hbase4:46825Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1200437897_17 at /127.0.0.1:52530 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@193b0903[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_583590140_17 at /127.0.0.1:52496 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp1139020458-2378 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024729879-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-127e508f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6ff6de2e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x1e751761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42481,1689707708157 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x23953b41-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@362de72e[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 450274285@qtp-103208499-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39115 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 39045 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1200437897_17 at /127.0.0.1:55288 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1220743267-2331 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 39045 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:52540 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2037186303-2367 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1545259874-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:55220): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp2024729879-2271-acceptor-0@6554bb59-ServerConnector@f1a837d{HTTP/1.1, (http/1.1)}{0.0.0.0:43377} sun.nio.ch.ServerSocketChannelImpl.accept0(Native 
Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1200437897_17 at /127.0.0.1:52470 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:37601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:38221-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:37601 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:38571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x7c6f260e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13a4782d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 275301875@qtp-103208499-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1545259874-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 481316119@qtp-1982738612-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 4 on default port 40771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:44507 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2037186303-2362-acceptor-0@23dcfeb7-ServerConnector@38def70b{HTTP/1.1, (http/1.1)}{0.0.0.0:46109} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2024729879-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1220743267-2332-acceptor-0@7873b656-ServerConnector@5aabd8da{HTTP/1.1, (http/1.1)}{0.0.0.0:42239} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@36c8a553 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x665b7fd1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x23953b41-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1220743267-2334 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1139020458-2374 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43365,1689707713306 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp2037186303-2363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2638-acceptor-0@204335d-ServerConnector@4bd213f4{HTTP/1.1, (http/1.1)}{0.0.0.0:45011} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1545259874-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1545259874-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x1e751761-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x147e08f4-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4c4e2bc5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1545259874-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-775fd6ef-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x7c6f260e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data6/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:38221Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-3 java.lang.Thread.currentThread(Native Method) io.netty.util.internal.InternalThreadLocalMap.get(InternalThreadLocalMap.java:69) io.netty.util.concurrent.FastThreadLocal.set(FastThreadLocal.java:192) io.netty.util.internal.ThreadExecutorMap.setCurrentEventExecutor(ThreadExecutorMap.java:44) io.netty.util.internal.ThreadExecutorMap.access$000(ThreadExecutorMap.java:27) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:76) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:1;jenkins-hbase4:46825-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-571-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-27362306_17 at /127.0.0.1:43538 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 38571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1139020458-2372 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 38571 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45101-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884-prefix:jenkins-hbase4.apache.org,38221,1689707713460 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59566@0x1cd96770 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x665b7fd1-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1314932742@qtp-1373768995-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45673 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2024729879-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707713666 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp1139020458-2375 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707713666 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x4c13588f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:38571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:37601 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-2cfcc8e9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1220743267-2336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1139020458-2377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x213b3a83-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884-prefix:jenkins-hbase4.apache.org,46825,1689707713423.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 351431247@qtp-1373768995-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1220743267-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@dc55ac7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1d7546a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x4c13588f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42899Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1398490559@qtp-294889135-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44253 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) 
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS:3;jenkins-hbase4:44507-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x4c13588f-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:2;jenkins-hbase4:38221 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x0254b24f-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:43552 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@f5d6833 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2037186303-2366 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:37601 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-27362306_17 at /127.0.0.1:55284 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x43293427-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:37601 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-27362306_17 at /127.0.0.1:52528 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1139020458-2376-acceptor-0@10f13ee9-ServerConnector@4d23f4a6{HTTP/1.1, (http/1.1)}{0.0.0.0:39707} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2037186303-2365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@702aad4d java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x7c6f260e-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:55290 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data5/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2037186303-2361 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/719681942.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@3a181017[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884-prefix:jenkins-hbase4.apache.org,42899,1689707713380 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 40771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2037186303-2368 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1139020458-2379 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@50d72f0c java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x43293427-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 40771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59566@0x1cd96770-SendThread(127.0.0.1:59566) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server idle connection scanner for port 38571 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x1e751761-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1545259874-2302-acceptor-0@33fe585c-ServerConnector@174373f0{HTTP/1.1, (http/1.1)}{0.0.0.0:41441} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43919 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1200437897_17 at /127.0.0.1:43544 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1545259874-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:55220 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@535847f0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@57ddebd8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 43919 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:43548 [Receiving block BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data2/current/BP-418551092-172.31.14.131-1689707712498 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55220@0x43293427 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1433986440.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@63fac8c9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1b6330d1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x147e08f4-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS:0;jenkins-hbase4:42899 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39045-SendThread(127.0.0.1:55220) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:44507Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2037186303-2364 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 38571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2124302421_17 at /127.0.0.1:55306 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/39045.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59566@0x1cd96770-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp735681169-2642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44507 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/39045.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-418551092-172.31.14.131-1689707712498:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:42899-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1726712443) connection to localhost/127.0.0.1:38571 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=840 (was 817) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=324 (was 336), ProcessCount=173 (was 173), AvailableMemoryMB=2551 (was 2872) 2023-07-18 19:15:14,999 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=566 is superior to 500 2023-07-18 19:15:15,017 INFO [RS:3;jenkins-hbase4:44507] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44507%2C1689707714791, suffix=, logDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,44507,1689707714791, archiveDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs, maxLogs=32 2023-07-18 19:15:15,020 INFO [Listener at localhost/39045] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=565, OpenFileDescriptor=840, MaxFileDescriptor=60000, SystemLoadAverage=324, ProcessCount=173, AvailableMemoryMB=2550 2023-07-18 19:15:15,020 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=565 is superior to 500 2023-07-18 19:15:15,020 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 19:15:15,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:15,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:15,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:15,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:15,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:15,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:15,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:15,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:15,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:15,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:15,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:15,036 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK] 2023-07-18 19:15:15,037 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK] 2023-07-18 19:15:15,037 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK] 2023-07-18 19:15:15,039 INFO [RS:3;jenkins-hbase4:44507] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,44507,1689707714791/jenkins-hbase4.apache.org%2C44507%2C1689707714791.1689707715018 2023-07-18 19:15:15,039 DEBUG [RS:3;jenkins-hbase4:44507] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35167,DS-9e0fc474-ea19-4fee-8e06-03fcc0c9d062,DISK], DatanodeInfoWithStorage[127.0.0.1:37693,DS-4c592f60-35b9-425f-b4c5-280fa2071c1d,DISK], DatanodeInfoWithStorage[127.0.0.1:33529,DS-d8354f95-bd07-49b3-abaa-9e106c0f93d0,DISK]] 2023-07-18 19:15:15,040 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:15,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:15,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:15,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:15,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:15,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:15,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:15,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:15,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:15,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:15,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:43328 deadline: 1689708915049, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:15,050 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:15,051 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:15,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:15,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:15,052 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:15,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:15,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:15,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:15,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 19:15:15,057 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:15,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 19:15:15,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 19:15:15,059 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:15,059 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:15,059 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:15,063 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 19:15:15,064 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 
19:15:15,064 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf empty. 2023-07-18 19:15:15,065 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,065 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 19:15:15,076 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 19:15:15,077 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2d4326b76bd2f9dc49d98a1bba8f25bf, NAME => 't1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 2d4326b76bd2f9dc49d98a1bba8f25bf, disabling compactions & flushes 2023-07-18 19:15:15,084 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. after waiting 0 ms 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,084 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 2d4326b76bd2f9dc49d98a1bba8f25bf: 2023-07-18 19:15:15,086 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 19:15:15,087 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707715087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707715087"}]},"ts":"1689707715087"} 2023-07-18 19:15:15,088 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 19:15:15,089 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 19:15:15,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707715089"}]},"ts":"1689707715089"} 2023-07-18 19:15:15,090 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 19:15:15,092 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 19:15:15,093 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 19:15:15,093 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 19:15:15,093 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 19:15:15,093 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 19:15:15,093 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 19:15:15,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, ASSIGN}] 2023-07-18 19:15:15,094 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, ASSIGN 2023-07-18 19:15:15,094 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44507,1689707714791; forceNewPlan=false, retain=false 2023-07-18 19:15:15,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 19:15:15,244 INFO [jenkins-hbase4:43365] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 19:15:15,246 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4326b76bd2f9dc49d98a1bba8f25bf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:15,246 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707715246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707715246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707715246"}]},"ts":"1689707715246"} 2023-07-18 19:15:15,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 2d4326b76bd2f9dc49d98a1bba8f25bf, server=jenkins-hbase4.apache.org,44507,1689707714791}] 2023-07-18 19:15:15,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 19:15:15,401 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:15,401 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 19:15:15,402 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 19:15:15,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2d4326b76bd2f9dc49d98a1bba8f25bf, NAME => 't1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.', STARTKEY => '', ENDKEY => ''} 2023-07-18 19:15:15,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 19:15:15,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,407 INFO [StoreOpener-2d4326b76bd2f9dc49d98a1bba8f25bf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,409 DEBUG [StoreOpener-2d4326b76bd2f9dc49d98a1bba8f25bf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/cf1 2023-07-18 19:15:15,409 DEBUG [StoreOpener-2d4326b76bd2f9dc49d98a1bba8f25bf-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/cf1 2023-07-18 19:15:15,409 INFO [StoreOpener-2d4326b76bd2f9dc49d98a1bba8f25bf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2d4326b76bd2f9dc49d98a1bba8f25bf columnFamilyName cf1 2023-07-18 19:15:15,410 INFO [StoreOpener-2d4326b76bd2f9dc49d98a1bba8f25bf-1] regionserver.HStore(310): Store=2d4326b76bd2f9dc49d98a1bba8f25bf/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 19:15:15,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 19:15:15,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2d4326b76bd2f9dc49d98a1bba8f25bf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10086522720, jitterRate=-0.06061936914920807}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 19:15:15,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2d4326b76bd2f9dc49d98a1bba8f25bf: 2023-07-18 19:15:15,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf., pid=14, masterSystemTime=1689707715400 2023-07-18 19:15:15,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 
2023-07-18 19:15:15,423 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2d4326b76bd2f9dc49d98a1bba8f25bf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:15,423 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707715423"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689707715423"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689707715423"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689707715423"}]},"ts":"1689707715423"} 2023-07-18 19:15:15,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 19:15:15,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 2d4326b76bd2f9dc49d98a1bba8f25bf, server=jenkins-hbase4.apache.org,44507,1689707714791 in 177 msec 2023-07-18 19:15:15,428 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 19:15:15,429 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, ASSIGN in 334 msec 2023-07-18 19:15:15,429 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 19:15:15,429 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707715429"}]},"ts":"1689707715429"} 2023-07-18 19:15:15,430 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 19:15:15,432 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 19:15:15,434 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 377 msec 2023-07-18 19:15:15,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 19:15:15,661 INFO [Listener at localhost/39045] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 19:15:15,661 DEBUG [Listener at localhost/39045] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 19:15:15,661 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:15,663 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 19:15:15,664 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:15,664 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-18 19:15:15,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 19:15:15,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 19:15:15,668 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 19:15:15,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 19:15:15,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:43328 deadline: 1689707775665, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 19:15:15,670 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:15,671 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-18 19:15:15,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:15,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:15,772 INFO [Listener at localhost/39045] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 19:15:15,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 19:15:15,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 19:15:15,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:15,776 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707715776"}]},"ts":"1689707715776"} 2023-07-18 19:15:15,777 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 19:15:15,779 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 19:15:15,781 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, UNASSIGN}] 2023-07-18 19:15:15,781 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, UNASSIGN 2023-07-18 19:15:15,782 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=2d4326b76bd2f9dc49d98a1bba8f25bf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:15,782 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707715782"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689707715782"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689707715782"}]},"ts":"1689707715782"} 2023-07-18 19:15:15,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 2d4326b76bd2f9dc49d98a1bba8f25bf, server=jenkins-hbase4.apache.org,44507,1689707714791}] 2023-07-18 19:15:15,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:15,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2d4326b76bd2f9dc49d98a1bba8f25bf, disabling compactions & flushes 2023-07-18 19:15:15,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. after waiting 0 ms 2023-07-18 19:15:15,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 
2023-07-18 19:15:15,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 19:15:15,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf. 2023-07-18 19:15:15,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2d4326b76bd2f9dc49d98a1bba8f25bf: 2023-07-18 19:15:15,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:15,941 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=2d4326b76bd2f9dc49d98a1bba8f25bf, regionState=CLOSED 2023-07-18 19:15:15,942 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689707715941"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689707715941"}]},"ts":"1689707715941"} 2023-07-18 19:15:15,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 19:15:15,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 2d4326b76bd2f9dc49d98a1bba8f25bf, server=jenkins-hbase4.apache.org,44507,1689707714791 in 160 msec 2023-07-18 19:15:15,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 19:15:15,947 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=2d4326b76bd2f9dc49d98a1bba8f25bf, UNASSIGN in 163 msec 2023-07-18 19:15:15,948 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689707715948"}]},"ts":"1689707715948"} 2023-07-18 19:15:15,949 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 19:15:15,950 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 19:15:15,956 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 183 msec 2023-07-18 19:15:16,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 19:15:16,078 INFO [Listener at localhost/39045] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 19:15:16,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 19:15:16,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 19:15:16,081 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 19:15:16,081 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 19:15:16,081 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 19:15:16,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,084 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:16,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 19:15:16,086 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/cf1, FileablePath, hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits] 2023-07-18 19:15:16,091 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits/4.seqid to hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/archive/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits/4.seqid 2023-07-18 19:15:16,091 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/.tmp/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf 2023-07-18 19:15:16,091 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 19:15:16,093 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 19:15:16,095 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 19:15:16,096 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 19:15:16,097 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 19:15:16,097 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
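During DELETE_TABLE_CLEAR_FS_LAYOUT above, HFileArchiver moves the region's files out of the table's data directory and into the cluster's archive tree (here archive/data/default/t1/2d4326b76bd2f9dc49d98a1bba8f25bf/recovered.edits/4.seqid) before the region is deleted from hbase:meta. A small sketch for inspecting what landed under that archive path with the plain Hadoop FileSystem API; the hdfs://localhost:38571/... prefix is this mini-cluster run's temporary root and is purely illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListArchivedRegionFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Archive location of table t1 for this particular run; substitute your own hbase.rootdir/archive path.
    Path archived = new Path(
        "hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/archive/data/default/t1");
    FileSystem fs = archived.getFileSystem(conf);
    // Recursively print every file HFileArchiver moved under archive/data/<namespace>/<table>/<region>/...
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(archived, true);
    while (it.hasNext()) {
      System.out.println(it.next().getPath());
    }
  }
}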
2023-07-18 19:15:16,098 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689707716097"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:16,099 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 19:15:16,099 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2d4326b76bd2f9dc49d98a1bba8f25bf, NAME => 't1,,1689707715054.2d4326b76bd2f9dc49d98a1bba8f25bf.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 19:15:16,099 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-18 19:15:16,099 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689707716099"}]},"ts":"9223372036854775807"} 2023-07-18 19:15:16,100 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 19:15:16,102 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 19:15:16,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-18 19:15:16,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 19:15:16,187 INFO [Listener at localhost/39045] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 19:15:16,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
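pid=19 above is the DeleteTableProcedure created by the client's delete call once t1 is disabled; when it finishes, the region directories, the hbase:meta rows, the table descriptor and the in-memory region states are all gone, and the client future logs "Operation: DELETE ... completed". A minimal sketch of the corresponding client calls, again with the stock Admin API (table name from the log, everything else illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    TableName t1 = TableName.valueOf("t1");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      if (!admin.isTableDisabled(t1)) {
        admin.disableTable(t1);   // DeleteTableProcedure requires the table to be disabled first
      }
      admin.deleteTable(t1);      // drives the DELETE_TABLE_* states traced above
      System.out.println("t1 still exists? " + admin.tableExists(t1)); // expected: false
    }
  }
}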
2023-07-18 19:15:16,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,216 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:43328 deadline: 1689708916242, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,243 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:16,247 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,248 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,269 INFO [Listener at localhost/39045] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=573 (was 565) - Thread LEAK? -, OpenFileDescriptor=842 (was 840) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=324 (was 324), ProcessCount=173 (was 173), AvailableMemoryMB=2556 (was 2550) - AvailableMemoryMB LEAK? 
- 2023-07-18 19:15:16,269 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 19:15:16,286 INFO [Listener at localhost/39045] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=324, ProcessCount=173, AvailableMemoryMB=2555 2023-07-18 19:15:16,286 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 19:15:16,286 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 19:15:16,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:16,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,298 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,302 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916306, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,307 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:16,309 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,309 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 19:15:16,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:16,312 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 19:15:16,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 19:15:16,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 19:15:16,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
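The cycle that keeps repeating above is the test's per-method teardown/setup: list rsgroups, move empty table and server sets back to "default", remove and re-add the "master" group, then attempt to move jenkins-hbase4.apache.org:43365 into it. Port 43365 is the master's RPC endpoint, not one of the regionservers (38221, 42899, 44507, 46825), so RSGroupAdminServer.moveServers() rejects it with the ConstraintException that TestRSGroupsBase then logs as "Got this on setup, FYI" and ignores. A minimal sketch of that sequence, assuming the hbase-rsgroup client API from branch-2 (the class and method names match the stack frames in the log; the address is the master's from this run and would normally be a live regionserver):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerIntoGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");   // "add rsgroup master" in the log
      // host:port of the master RPC endpoint in this run - not an online regionserver,
      // so the move is refused:
      Address master = Address.fromString("jenkins-hbase4.apache.org:43365");
      try {
        rsGroupAdmin.moveServers(Collections.singleton(master), "master");
      } catch (ConstraintException e) {
        // "Server ... is either offline or it does not exist." - the exception seen above
        System.out.println(e.getMessage());
      }
    }
  }
}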
2023-07-18 19:15:16,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,332 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916344, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,345 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:16,346 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,348 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,377 INFO [Listener at localhost/39045] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=575 (was 573) - Thread LEAK? -, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=324 (was 324), ProcessCount=173 (was 173), AvailableMemoryMB=2557 (was 2555) - AvailableMemoryMB LEAK? 
- 2023-07-18 19:15:16,378 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-18 19:15:16,398 INFO [Listener at localhost/39045] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=324, ProcessCount=173, AvailableMemoryMB=2558 2023-07-18 19:15:16,398 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-18 19:15:16,399 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 19:15:16,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:16,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,414 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,419 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916427, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,428 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:16,430 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,431 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
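The teardown sequence logged above (list the groups, move empty table and server sets back to default, remove and re-add the master group, then try to move the master's own address into it) is driven through the RSGroupAdmin client API, and the ConstraintException is expected: jenkins-hbase4.apache.org:43365 is the active master rather than a live region server, so it cannot be placed in a group. A minimal sketch of that sequence, assuming the branch-2.4 RSGroupAdminClient API; the standalone driver class and its cluster connection are illustrative, not taken from the test source.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupMoveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);

          // Recreate the bookkeeping group the test harness keeps around.
          groupAdmin.addRSGroup("master");

          // Only live region servers can be moved into a group; the active
          // master's address is rejected with the ConstraintException seen in
          // the log ("... is either offline or it does not exist").
          Address master = Address.fromParts("jenkins-hbase4.apache.org", 43365);
          try {
            groupAdmin.moveServers(Collections.singleton(master), "master");
          } catch (ConstraintException expected) {
            // TestRSGroupsBase logs and ignores this: "Got this on setup, FYI".
          }
        }
      }
    }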
2023-07-18 19:15:16,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,447 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916460, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,461 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:16,463 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,464 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,482 INFO [Listener at localhost/39045] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576 (was 575) - Thread LEAK? 
-, OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=324 (was 324), ProcessCount=173 (was 173), AvailableMemoryMB=2558 (was 2558) 2023-07-18 19:15:16,482 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 19:15:16,499 INFO [Listener at localhost/39045] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576, OpenFileDescriptor=842, MaxFileDescriptor=60000, SystemLoadAverage=324, ProcessCount=173, AvailableMemoryMB=2557 2023-07-18 19:15:16,499 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 19:15:16,499 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 19:15:16,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 19:15:16,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,512 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,515 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916521, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,522 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 19:15:16,523 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,524 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,525 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 19:15:16,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 19:15:16,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 19:15:16,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 19:15:16,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 19:15:16,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,538 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 19:15:16,542 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:16,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 19:15:16,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 19:15:16,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 19:15:16,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:43328 deadline: 1689708916640, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 19:15:16,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 19:15:16,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:15:16,663 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 19:15:16,664 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-18 19:15:16,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 19:15:16,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 19:15:16,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 19:15:16,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 19:15:16,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 19:15:16,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 19:15:16,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,779 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 19:15:16,783 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,784 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,785 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 19:15:16,785 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 19:15:16,786 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,788 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 19:15:16,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-18 19:15:16,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 19:15:16,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 19:15:16,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 19:15:16,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 19:15:16,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:43328 deadline: 1689707776893, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 19:15:16,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:16,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 19:15:16,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 19:15:16,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 19:15:16,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 19:15:16,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 19:15:16,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 19:15:16,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 19:15:16,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 19:15:16,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 19:15:16,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 19:15:16,911 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 19:15:16,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 19:15:16,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 19:15:16,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 19:15:16,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 19:15:16,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 19:15:16,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43365] to rsgroup master 2023-07-18 19:15:16,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 19:15:16,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:43328 deadline: 1689708916919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 2023-07-18 19:15:16,919 WARN [Listener at localhost/39045] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:43365 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 19:15:16,921 INFO [Listener at localhost/39045] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 19:15:16,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 19:15:16,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 19:15:16,922 INFO [Listener at localhost/39045] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:38221, jenkins-hbase4.apache.org:42899, jenkins-hbase4.apache.org:44507, jenkins-hbase4.apache.org:46825], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 19:15:16,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 19:15:16,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43365] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 19:15:16,943 INFO [Listener at localhost/39045] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=576 (was 576), OpenFileDescriptor=842 (was 842), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=324 (was 324), ProcessCount=173 (was 173), AvailableMemoryMB=2571 (was 2557) - AvailableMemoryMB LEAK? 
- 2023-07-18 19:15:16,943 WARN [Listener at localhost/39045] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-18 19:15:16,943 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 19:15:16,943 INFO [Listener at localhost/39045] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 19:15:16,943 DEBUG [Listener at localhost/39045] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c13588f to 127.0.0.1:55220 2023-07-18 19:15:16,943 DEBUG [Listener at localhost/39045] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,943 DEBUG [Listener at localhost/39045] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 19:15:16,943 DEBUG [Listener at localhost/39045] util.JVMClusterUtil(257): Found active master hash=153293241, stopped=false 2023-07-18 19:15:16,944 DEBUG [Listener at localhost/39045] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 19:15:16,944 DEBUG [Listener at localhost/39045] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 19:15:16,944 INFO [Listener at localhost/39045] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43365,1689707713306 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:16,947 INFO [Listener at localhost/39045] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 19:15:16,947 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 19:15:16,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:16,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:16,947 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:16,948 DEBUG [Listener at localhost/39045] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e751761 to 127.0.0.1:55220 2023-07-18 19:15:16,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:16,948 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 19:15:16,948 DEBUG [Listener at localhost/39045] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42899,1689707713380' ***** 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46825,1689707713423' ***** 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38221,1689707713460' ***** 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44507,1689707714791' ***** 2023-07-18 19:15:16,948 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:16,948 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:16,948 INFO [Listener at localhost/39045] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 19:15:16,948 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:16,949 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 19:15:16,953 INFO [RS:0;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@11bb793a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:16,953 INFO [RS:1;jenkins-hbase4:46825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@14d5be5b{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:16,955 INFO [RS:0;jenkins-hbase4:42899] server.AbstractConnector(383): Stopped ServerConnector@174373f0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:16,955 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:16,955 INFO [RS:0;jenkins-hbase4:42899] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:16,955 INFO [RS:1;jenkins-hbase4:46825] server.AbstractConnector(383): Stopped 
ServerConnector@5aabd8da{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:16,955 INFO [RS:2;jenkins-hbase4:38221] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4fdc91ed{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:16,955 INFO [RS:3;jenkins-hbase4:44507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@21e0cde3{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 19:15:16,956 INFO [RS:0;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4185f02{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:16,956 INFO [RS:1;jenkins-hbase4:46825] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:16,957 INFO [RS:0;jenkins-hbase4:42899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@47397f67{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:16,957 INFO [RS:2;jenkins-hbase4:38221] server.AbstractConnector(383): Stopped ServerConnector@38def70b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:16,957 INFO [RS:3;jenkins-hbase4:44507] server.AbstractConnector(383): Stopped ServerConnector@4bd213f4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 19:15:16,958 INFO [RS:2;jenkins-hbase4:38221] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:16,958 INFO [RS:1;jenkins-hbase4:46825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a8ac131{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:16,959 INFO [RS:0;jenkins-hbase4:42899] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:16,960 INFO [RS:1;jenkins-hbase4:46825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@472a49f2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:16,958 INFO [RS:3;jenkins-hbase4:44507] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 19:15:16,960 INFO [RS:2;jenkins-hbase4:38221] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@cf5170{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:16,960 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:16,962 INFO [RS:2;jenkins-hbase4:38221] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4efaa00c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:16,960 INFO [RS:0;jenkins-hbase4:42899] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-18 19:15:16,962 INFO [RS:0;jenkins-hbase4:42899] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:16,962 INFO [RS:1;jenkins-hbase4:46825] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:16,962 INFO [RS:3;jenkins-hbase4:44507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c632bc4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 19:15:16,962 INFO [RS:1;jenkins-hbase4:46825] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:16,962 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42899,1689707713380 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(3305): Received CLOSE for 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:16,963 DEBUG [RS:0;jenkins-hbase4:42899] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x665b7fd1 to 127.0.0.1:55220 2023-07-18 19:15:16,963 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:16,963 DEBUG [RS:0;jenkins-hbase4:42899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,963 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42899,1689707713380; all regions closed. 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46825,1689707713423 2023-07-18 19:15:16,963 DEBUG [RS:1;jenkins-hbase4:46825] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0254b24f to 127.0.0.1:55220 2023-07-18 19:15:16,963 DEBUG [RS:1;jenkins-hbase4:46825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:16,963 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 19:15:16,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 77e6d5a7493d3a9d5ff26a5f498a28d8, disabling compactions & flushes 2023-07-18 19:15:16,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:16,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 
2023-07-18 19:15:16,964 INFO [RS:3;jenkins-hbase4:44507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5acf0cd7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,STOPPED} 2023-07-18 19:15:16,964 INFO [RS:2;jenkins-hbase4:38221] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:16,965 INFO [RS:2;jenkins-hbase4:38221] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:16,965 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:16,965 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 19:15:16,965 INFO [RS:2;jenkins-hbase4:38221] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 19:15:16,965 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 77e6d5a7493d3a9d5ff26a5f498a28d8=hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8.} 2023-07-18 19:15:16,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. after waiting 1 ms 2023-07-18 19:15:16,965 DEBUG [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1504): Waiting on 1588230740, 77e6d5a7493d3a9d5ff26a5f498a28d8 2023-07-18 19:15:16,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:16,965 INFO [RS:3;jenkins-hbase4:44507] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 19:15:16,966 INFO [RS:3;jenkins-hbase4:44507] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 19:15:16,965 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 19:15:16,966 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 19:15:16,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 19:15:16,965 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(3305): Received CLOSE for b9272c44f2ba649af542994b09338576 2023-07-18 19:15:16,966 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 19:15:16,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 77e6d5a7493d3a9d5ff26a5f498a28d8 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 19:15:16,966 INFO [RS:3;jenkins-hbase4:44507] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 19:15:16,966 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 19:15:16,966 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44507,1689707714791 2023-07-18 19:15:16,966 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 19:15:16,966 DEBUG [RS:3;jenkins-hbase4:44507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43293427 to 127.0.0.1:55220 2023-07-18 19:15:16,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 19:15:16,966 DEBUG [RS:3;jenkins-hbase4:44507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,966 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44507,1689707714791; all regions closed. 2023-07-18 19:15:16,967 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38221,1689707713460 2023-07-18 19:15:16,967 DEBUG [RS:2;jenkins-hbase4:38221] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x23953b41 to 127.0.0.1:55220 2023-07-18 19:15:16,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b9272c44f2ba649af542994b09338576, disabling compactions & flushes 2023-07-18 19:15:16,967 DEBUG [RS:2;jenkins-hbase4:38221] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,968 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 19:15:16,968 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1478): Online Regions={b9272c44f2ba649af542994b09338576=hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576.} 2023-07-18 19:15:16,968 DEBUG [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1504): Waiting on b9272c44f2ba649af542994b09338576 2023-07-18 19:15:16,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:16,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:16,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. after waiting 0 ms 2023-07-18 19:15:16,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 
2023-07-18 19:15:16,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b9272c44f2ba649af542994b09338576 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-18 19:15:16,977 DEBUG [RS:0;jenkins-hbase4:42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs 2023-07-18 19:15:16,977 INFO [RS:0;jenkins-hbase4:42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42899%2C1689707713380:(num 1689707713955) 2023-07-18 19:15:16,977 DEBUG [RS:0;jenkins-hbase4:42899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,977 INFO [RS:0;jenkins-hbase4:42899] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:16,978 INFO [RS:0;jenkins-hbase4:42899] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:16,978 INFO [RS:0;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:16,978 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 19:15:16,978 INFO [RS:0;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:16,978 INFO [RS:0;jenkins-hbase4:42899] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:16,979 INFO [RS:0;jenkins-hbase4:42899] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42899 2023-07-18 19:15:16,988 DEBUG [RS:3;jenkins-hbase4:44507] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs 2023-07-18 19:15:16,988 INFO [RS:3;jenkins-hbase4:44507] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44507%2C1689707714791:(num 1689707715018) 2023-07-18 19:15:16,989 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:16,989 DEBUG [RS:3;jenkins-hbase4:44507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 19:15:16,989 INFO [RS:3;jenkins-hbase4:44507] regionserver.LeaseManager(133): Closed leases 2023-07-18 19:15:16,991 INFO [RS:3;jenkins-hbase4:44507] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 19:15:16,992 INFO [RS:3;jenkins-hbase4:44507] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 19:15:16,992 INFO [RS:3;jenkins-hbase4:44507] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 19:15:16,992 INFO [RS:3;jenkins-hbase4:44507] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 19:15:16,992 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-18 19:15:16,996 INFO [RS:3;jenkins-hbase4:44507] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44507 2023-07-18 19:15:16,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/info/bd7368357a9d4c43a060a1f0275b6d12 2023-07-18 19:15:17,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/.tmp/m/c098a9df3b054e74bafe80821558a013 2023-07-18 19:15:17,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/.tmp/info/e651e8025bca44059666feec9dc8f204 2023-07-18 19:15:17,007 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bd7368357a9d4c43a060a1f0275b6d12 2023-07-18 19:15:17,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c098a9df3b054e74bafe80821558a013 2023-07-18 19:15:17,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/.tmp/m/c098a9df3b054e74bafe80821558a013 as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/m/c098a9df3b054e74bafe80821558a013 2023-07-18 19:15:17,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e651e8025bca44059666feec9dc8f204 2023-07-18 19:15:17,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/.tmp/info/e651e8025bca44059666feec9dc8f204 as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/info/e651e8025bca44059666feec9dc8f204 2023-07-18 19:15:17,014 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c098a9df3b054e74bafe80821558a013 2023-07-18 19:15:17,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/m/c098a9df3b054e74bafe80821558a013, entries=12, sequenceid=29, filesize=5.4 K 2023-07-18 19:15:17,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 77e6d5a7493d3a9d5ff26a5f498a28d8 
in 51ms, sequenceid=29, compaction requested=false 2023-07-18 19:15:17,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e651e8025bca44059666feec9dc8f204 2023-07-18 19:15:17,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/info/e651e8025bca44059666feec9dc8f204, entries=3, sequenceid=9, filesize=5.0 K 2023-07-18 19:15:17,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for b9272c44f2ba649af542994b09338576 in 53ms, sequenceid=9, compaction requested=false 2023-07-18 19:15:17,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/rsgroup/77e6d5a7493d3a9d5ff26a5f498a28d8/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-18 19:15:17,041 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/namespace/b9272c44f2ba649af542994b09338576/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 19:15:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 19:15:17,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:17,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 2023-07-18 19:15:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 77e6d5a7493d3a9d5ff26a5f498a28d8: 2023-07-18 19:15:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b9272c44f2ba649af542994b09338576: 2023-07-18 19:15:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689707714261.77e6d5a7493d3a9d5ff26a5f498a28d8. 2023-07-18 19:15:17,042 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689707714161.b9272c44f2ba649af542994b09338576. 
2023-07-18 19:15:17,048 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 19:15:17,048 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 19:15:17,049 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/rep_barrier/d9eecd6bc358405fbba70d090439344f
2023-07-18 19:15:17,055 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d9eecd6bc358405fbba70d090439344f
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791
2023-07-18 19:15:17,072 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/table/93bac5ad53db42f98e80ea9eb4e3e06a
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44507,1689707714791
2023-07-18 19:15:17,072 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,073 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380
2023-07-18 19:15:17,073 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380
2023-07-18 19:15:17,073 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380
2023-07-18 19:15:17,073 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42899,1689707713380]
2023-07-18 19:15:17,073 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42899,1689707713380; numProcessing=1
2023-07-18 19:15:17,073 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42899,1689707713380
2023-07-18 19:15:17,076 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42899,1689707713380 already deleted, retry=false
2023-07-18 19:15:17,077 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42899,1689707713380 expired; onlineServers=3
2023-07-18 19:15:17,077 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44507,1689707714791]
2023-07-18 19:15:17,077 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44507,1689707714791; numProcessing=2
2023-07-18 19:15:17,078 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44507,1689707714791 already deleted, retry=false
2023-07-18 19:15:17,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 93bac5ad53db42f98e80ea9eb4e3e06a
2023-07-18 19:15:17,078 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44507,1689707714791 expired; onlineServers=2
2023-07-18 19:15:17,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/info/bd7368357a9d4c43a060a1f0275b6d12 as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/info/bd7368357a9d4c43a060a1f0275b6d12
2023-07-18 19:15:17,085 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bd7368357a9d4c43a060a1f0275b6d12
2023-07-18 19:15:17,085 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/info/bd7368357a9d4c43a060a1f0275b6d12, entries=22, sequenceid=26, filesize=7.3 K
2023-07-18 19:15:17,086 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/rep_barrier/d9eecd6bc358405fbba70d090439344f as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/rep_barrier/d9eecd6bc358405fbba70d090439344f
2023-07-18 19:15:17,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d9eecd6bc358405fbba70d090439344f
2023-07-18 19:15:17,091 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/rep_barrier/d9eecd6bc358405fbba70d090439344f, entries=1, sequenceid=26, filesize=4.9 K
2023-07-18 19:15:17,092 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/.tmp/table/93bac5ad53db42f98e80ea9eb4e3e06a as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/table/93bac5ad53db42f98e80ea9eb4e3e06a
2023-07-18 19:15:17,097 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 93bac5ad53db42f98e80ea9eb4e3e06a
2023-07-18 19:15:17,098 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/table/93bac5ad53db42f98e80ea9eb4e3e06a, entries=6, sequenceid=26, filesize=5.1 K
2023-07-18 19:15:17,098 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 132ms, sequenceid=26, compaction requested=false
2023-07-18 19:15:17,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1
2023-07-18 19:15:17,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-18 19:15:17,109 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-18 19:15:17,109 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-18 19:15:17,110 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-18 19:15:17,165 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46825,1689707713423; all regions closed.
2023-07-18 19:15:17,168 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38221,1689707713460; all regions closed.
2023-07-18 19:15:17,176 DEBUG [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs
2023-07-18 19:15:17,176 INFO [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46825%2C1689707713423.meta:.meta(num 1689707714096)
2023-07-18 19:15:17,177 DEBUG [RS:2;jenkins-hbase4:38221] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs
2023-07-18 19:15:17,177 INFO [RS:2;jenkins-hbase4:38221] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38221%2C1689707713460:(num 1689707713960)
2023-07-18 19:15:17,177 DEBUG [RS:2;jenkins-hbase4:38221] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 19:15:17,177 INFO [RS:2;jenkins-hbase4:38221] regionserver.LeaseManager(133): Closed leases
2023-07-18 19:15:17,178 INFO [RS:2;jenkins-hbase4:38221] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-18 19:15:17,178 INFO [RS:2;jenkins-hbase4:38221] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-18 19:15:17,178 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 19:15:17,178 INFO [RS:2;jenkins-hbase4:38221] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-18 19:15:17,178 INFO [RS:2;jenkins-hbase4:38221] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-18 19:15:17,179 INFO [RS:2;jenkins-hbase4:38221] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38221
2023-07-18 19:15:17,185 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/WALs/jenkins-hbase4.apache.org,46825,1689707713423/jenkins-hbase4.apache.org%2C46825%2C1689707713423.1689707713964 not finished, retry = 0
2023-07-18 19:15:17,188 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460
2023-07-18 19:15:17,188 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,188 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38221,1689707713460
2023-07-18 19:15:17,189 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38221,1689707713460]
2023-07-18 19:15:17,189 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38221,1689707713460; numProcessing=3
2023-07-18 19:15:17,190 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38221,1689707713460 already deleted, retry=false
2023-07-18 19:15:17,190 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38221,1689707713460 expired; onlineServers=1
2023-07-18 19:15:17,288 DEBUG [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/oldWALs
2023-07-18 19:15:17,288 INFO [RS:1;jenkins-hbase4:46825] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46825%2C1689707713423:(num 1689707713964)
2023-07-18 19:15:17,288 DEBUG [RS:1;jenkins-hbase4:46825] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 19:15:17,288 INFO [RS:1;jenkins-hbase4:46825] regionserver.LeaseManager(133): Closed leases
2023-07-18 19:15:17,288 INFO [RS:1;jenkins-hbase4:46825] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-18 19:15:17,288 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 19:15:17,290 INFO [RS:1;jenkins-hbase4:46825] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46825
2023-07-18 19:15:17,291 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46825,1689707713423
2023-07-18 19:15:17,291 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 19:15:17,294 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46825,1689707713423]
2023-07-18 19:15:17,294 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46825,1689707713423; numProcessing=4
2023-07-18 19:15:17,295 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46825,1689707713423 already deleted, retry=false
2023-07-18 19:15:17,295 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46825,1689707713423 expired; onlineServers=0
2023-07-18 19:15:17,295 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43365,1689707713306' *****
2023-07-18 19:15:17,295 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-18 19:15:17,296 DEBUG [M:0;jenkins-hbase4:43365] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d1c9445, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-18 19:15:17,296 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-18 19:15:17,298 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-18 19:15:17,298 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-18 19:15:17,298 INFO [M:0;jenkins-hbase4:43365] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7e676d97{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-18 19:15:17,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-18 19:15:17,299 INFO [M:0;jenkins-hbase4:43365] server.AbstractConnector(383): Stopped ServerConnector@f1a837d{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-18 19:15:17,299 INFO [M:0;jenkins-hbase4:43365] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-18 19:15:17,299 INFO [M:0;jenkins-hbase4:43365] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40c50e37{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED}
2023-07-18 19:15:17,300 INFO [M:0;jenkins-hbase4:43365] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@212fee63{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/hadoop.log.dir/,STOPPED}
2023-07-18 19:15:17,300 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43365,1689707713306
2023-07-18 19:15:17,300 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43365,1689707713306; all regions closed.
2023-07-18 19:15:17,300 DEBUG [M:0;jenkins-hbase4:43365] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 19:15:17,300 INFO [M:0;jenkins-hbase4:43365] master.HMaster(1491): Stopping master jetty server
2023-07-18 19:15:17,301 INFO [M:0;jenkins-hbase4:43365] server.AbstractConnector(383): Stopped ServerConnector@4d23f4a6{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-18 19:15:17,301 DEBUG [M:0;jenkins-hbase4:43365] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-18 19:15:17,301 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-18 19:15:17,301 DEBUG [M:0;jenkins-hbase4:43365] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-18 19:15:17,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707713666] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689707713666,5,FailOnTimeoutGroup]
2023-07-18 19:15:17,301 INFO [M:0;jenkins-hbase4:43365] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-18 19:15:17,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707713666] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689707713666,5,FailOnTimeoutGroup]
2023-07-18 19:15:17,301 INFO [M:0;jenkins-hbase4:43365] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-18 19:15:17,301 INFO [M:0;jenkins-hbase4:43365] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-18 19:15:17,301 DEBUG [M:0;jenkins-hbase4:43365] master.HMaster(1512): Stopping service threads
2023-07-18 19:15:17,301 INFO [M:0;jenkins-hbase4:43365] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-18 19:15:17,302 ERROR [M:0;jenkins-hbase4:43365] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-18 19:15:17,302 INFO [M:0;jenkins-hbase4:43365] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-18 19:15:17,302 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-18 19:15:17,302 DEBUG [M:0;jenkins-hbase4:43365] zookeeper.ZKUtil(398): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-18 19:15:17,302 WARN [M:0;jenkins-hbase4:43365] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-18 19:15:17,302 INFO [M:0;jenkins-hbase4:43365] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-18 19:15:17,302 INFO [M:0;jenkins-hbase4:43365] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-18 19:15:17,302 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-18 19:15:17,302 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 19:15:17,302 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 19:15:17,302 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-18 19:15:17,303 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 19:15:17,303 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.20 KB heapSize=90.66 KB
2023-07-18 19:15:17,314 INFO [M:0;jenkins-hbase4:43365] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.20 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d0a6f14fc13846c7af7d970671688242
2023-07-18 19:15:17,320 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/d0a6f14fc13846c7af7d970671688242 as hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d0a6f14fc13846c7af7d970671688242
2023-07-18 19:15:17,325 INFO [M:0;jenkins-hbase4:43365] regionserver.HStore(1080): Added hdfs://localhost:38571/user/jenkins/test-data/5ce8cb9a-6090-e3d8-e54c-f32af6b2c884/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/d0a6f14fc13846c7af7d970671688242, entries=22, sequenceid=175, filesize=11.1 K
2023-07-18 19:15:17,326 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegion(2948): Finished flush of dataSize ~76.20 KB/78030, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false
2023-07-18 19:15:17,328 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 19:15:17,328 DEBUG [M:0;jenkins-hbase4:43365] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-18 19:15:17,331 INFO [M:0;jenkins-hbase4:43365] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-18 19:15:17,331 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 19:15:17,332 INFO [M:0;jenkins-hbase4:43365] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43365
2023-07-18 19:15:17,333 DEBUG [M:0;jenkins-hbase4:43365] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43365,1689707713306 already deleted, retry=false
2023-07-18 19:15:17,546 INFO [M:0;jenkins-hbase4:43365] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43365,1689707713306; zookeeper connection closed.
2023-07-18 19:15:17,546 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,547 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): master:43365-0x10179dc16070000, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,647 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,647 INFO [RS:1;jenkins-hbase4:46825] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46825,1689707713423; zookeeper connection closed.
2023-07-18 19:15:17,647 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:46825-0x10179dc16070002, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,647 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@8e810a4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@8e810a4
2023-07-18 19:15:17,747 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,747 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:38221-0x10179dc16070003, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,747 INFO [RS:2;jenkins-hbase4:38221] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38221,1689707713460; zookeeper connection closed.
2023-07-18 19:15:17,748 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5fb30977] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5fb30977
2023-07-18 19:15:17,848 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,848 INFO [RS:3;jenkins-hbase4:44507] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44507,1689707714791; zookeeper connection closed.
2023-07-18 19:15:17,848 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:44507-0x10179dc1607000b, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,848 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2ecd7698] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2ecd7698
2023-07-18 19:15:17,948 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,948 INFO [RS:0;jenkins-hbase4:42899] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42899,1689707713380; zookeeper connection closed.
2023-07-18 19:15:17,948 DEBUG [Listener at localhost/39045-EventThread] zookeeper.ZKWatcher(600): regionserver:42899-0x10179dc16070001, quorum=127.0.0.1:55220, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 19:15:17,948 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722
2023-07-18 19:15:17,948 INFO [Listener at localhost/39045] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-18 19:15:17,949 WARN [Listener at localhost/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 19:15:17,952 INFO [Listener at localhost/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 19:15:18,055 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 19:15:18,055 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-418551092-172.31.14.131-1689707712498 (Datanode Uuid e1e890d4-3692-4053-92e3-d1d407d6aa08) service to localhost/127.0.0.1:38571
2023-07-18 19:15:18,056 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data5/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,056 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data6/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,058 WARN [Listener at localhost/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 19:15:18,060 INFO [Listener at localhost/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 19:15:18,163 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 19:15:18,163 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-418551092-172.31.14.131-1689707712498 (Datanode Uuid 5c030bd4-bd7e-4b72-8ba7-967bcedafe25) service to localhost/127.0.0.1:38571
2023-07-18 19:15:18,163 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data3/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,164 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data4/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,165 WARN [Listener at localhost/39045] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 19:15:18,168 INFO [Listener at localhost/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 19:15:18,271 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 19:15:18,272 WARN [BP-418551092-172.31.14.131-1689707712498 heartbeating to localhost/127.0.0.1:38571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-418551092-172.31.14.131-1689707712498 (Datanode Uuid 04ce664a-c5ee-43dc-9ad6-67256d4dad95) service to localhost/127.0.0.1:38571
2023-07-18 19:15:18,272 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data1/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,273 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b4553d28-9edb-db0a-99e4-d13db87bdcb9/cluster_340b5f55-8bb8-4c60-b715-d2cd1d60a9a0/dfs/data/data2/current/BP-418551092-172.31.14.131-1689707712498] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 19:15:18,283 INFO [Listener at localhost/39045] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 19:15:18,398 INFO [Listener at localhost/39045] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-18 19:15:18,427 INFO [Listener at localhost/39045] hbase.HBaseTestingUtility(1293): Minicluster is down