2023-07-15 18:14:54,916 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9 2023-07-15 18:14:54,933 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-15 18:14:54,950 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 18:14:54,950 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f, deleteOnExit=true 2023-07-15 18:14:54,951 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 18:14:54,951 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/test.cache.data in system properties and HBase conf 2023-07-15 18:14:54,951 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 18:14:54,952 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir in system properties and HBase conf 2023-07-15 18:14:54,952 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 18:14:54,953 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 18:14:54,953 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 18:14:55,096 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-15 18:14:55,611 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-15 18:14:55,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:14:55,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:14:55,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 18:14:55,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:14:55,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 18:14:55,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 18:14:55,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:14:55,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:14:55,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 18:14:55,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/nfs.dump.dir in system properties and HBase conf 2023-07-15 18:14:55,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/java.io.tmpdir in system properties and HBase conf 2023-07-15 18:14:55,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:14:55,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 18:14:55,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 18:14:56,233 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:14:56,238 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:14:56,602 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-15 18:14:56,791 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-15 18:14:56,806 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:14:56,843 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:14:56,876 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/java.io.tmpdir/Jetty_localhost_43591_hdfs____.epnm7c/webapp 2023-07-15 18:14:57,020 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43591 2023-07-15 18:14:57,063 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:14:57,063 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:14:57,599 WARN [Listener at localhost/44585] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:14:57,735 WARN [Listener at localhost/44585] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:14:57,759 WARN [Listener at localhost/44585] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:14:57,768 INFO [Listener at localhost/44585] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:14:57,778 INFO [Listener at localhost/44585] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/java.io.tmpdir/Jetty_localhost_41873_datanode____.azq78i/webapp 2023-07-15 18:14:57,918 INFO [Listener at localhost/44585] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41873 2023-07-15 18:14:58,370 WARN [Listener at localhost/38129] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:14:58,405 WARN [Listener at localhost/38129] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:14:58,409 WARN [Listener at localhost/38129] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:14:58,413 INFO [Listener at localhost/38129] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:14:58,420 INFO [Listener at localhost/38129] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/java.io.tmpdir/Jetty_localhost_34971_datanode____.o2eix3/webapp 2023-07-15 18:14:58,516 INFO [Listener at localhost/38129] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34971 2023-07-15 18:14:58,530 WARN [Listener at localhost/41565] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:14:58,547 WARN [Listener at localhost/41565] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:14:58,550 WARN [Listener at localhost/41565] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:14:58,552 INFO [Listener at localhost/41565] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:14:58,557 INFO [Listener at localhost/41565] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/java.io.tmpdir/Jetty_localhost_32771_datanode____dzifll/webapp 2023-07-15 18:14:58,670 INFO [Listener at localhost/41565] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32771 2023-07-15 18:14:58,678 WARN [Listener at localhost/40085] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:14:58,931 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd1dbd8bcc1569f52: Processing first storage report for DS-00566f2b-0518-49e6-9ca8-6db1edc7b717 from datanode 29c7eb7e-84fd-4d72-8500-ffa97dbd968b 2023-07-15 18:14:58,933 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd1dbd8bcc1569f52: from storage DS-00566f2b-0518-49e6-9ca8-6db1edc7b717 node DatanodeRegistration(127.0.0.1:40573, datanodeUuid=29c7eb7e-84fd-4d72-8500-ffa97dbd968b, infoPort=36155, 
infoSecurePort=0, ipcPort=40085, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-15 18:14:58,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1d2a0046b2e868fb: Processing first storage report for DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62 from datanode 6ac438f9-c133-464b-a936-70d4d7e9dd46 2023-07-15 18:14:58,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1d2a0046b2e868fb: from storage DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62 node DatanodeRegistration(127.0.0.1:37573, datanodeUuid=6ac438f9-c133-464b-a936-70d4d7e9dd46, infoPort=40723, infoSecurePort=0, ipcPort=38129, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:14:58,934 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x49b372066fec401f: Processing first storage report for DS-81d85367-4607-466b-a028-36462b1964fb from datanode f09ed7b3-8b8e-4b4f-be93-070d5e76138b 2023-07-15 18:14:58,934 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x49b372066fec401f: from storage DS-81d85367-4607-466b-a028-36462b1964fb node DatanodeRegistration(127.0.0.1:46049, datanodeUuid=f09ed7b3-8b8e-4b4f-be93-070d5e76138b, infoPort=41025, infoSecurePort=0, ipcPort=41565, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:14:58,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd1dbd8bcc1569f52: Processing first storage report for DS-2971ff5f-a96a-433e-81bf-f6776e33c12e from datanode 29c7eb7e-84fd-4d72-8500-ffa97dbd968b 2023-07-15 18:14:58,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd1dbd8bcc1569f52: from storage DS-2971ff5f-a96a-433e-81bf-f6776e33c12e node DatanodeRegistration(127.0.0.1:40573, datanodeUuid=29c7eb7e-84fd-4d72-8500-ffa97dbd968b, infoPort=36155, infoSecurePort=0, ipcPort=40085, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:14:58,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1d2a0046b2e868fb: Processing first storage report for DS-a873a9a2-2eb1-429a-9716-d6408e111497 from datanode 6ac438f9-c133-464b-a936-70d4d7e9dd46 2023-07-15 18:14:58,935 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1d2a0046b2e868fb: from storage DS-a873a9a2-2eb1-429a-9716-d6408e111497 node DatanodeRegistration(127.0.0.1:37573, datanodeUuid=6ac438f9-c133-464b-a936-70d4d7e9dd46, infoPort=40723, infoSecurePort=0, ipcPort=38129, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:14:58,935 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x49b372066fec401f: Processing first storage report for DS-75810ee1-f19b-49da-ad26-fe71e9712649 from datanode f09ed7b3-8b8e-4b4f-be93-070d5e76138b 2023-07-15 18:14:58,936 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x49b372066fec401f: from storage 
DS-75810ee1-f19b-49da-ad26-fe71e9712649 node DatanodeRegistration(127.0.0.1:46049, datanodeUuid=f09ed7b3-8b8e-4b4f-be93-070d5e76138b, infoPort=41025, infoSecurePort=0, ipcPort=41565, storageInfo=lv=-57;cid=testClusterID;nsid=88584819;c=1689444896326), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-15 18:14:59,169 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9 2023-07-15 18:14:59,249 INFO [Listener at localhost/40085] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/zookeeper_0, clientPort=54099, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 18:14:59,269 INFO [Listener at localhost/40085] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54099 2023-07-15 18:14:59,280 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:14:59,282 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:00,011 INFO [Listener at localhost/40085] util.FSUtils(471): Created version file at hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 with version=8 2023-07-15 18:15:00,011 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/hbase-staging 2023-07-15 18:15:00,022 DEBUG [Listener at localhost/40085] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 18:15:00,022 DEBUG [Listener at localhost/40085] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 18:15:00,023 DEBUG [Listener at localhost/40085] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 18:15:00,023 DEBUG [Listener at localhost/40085] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
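The startup sequence logged above is the standard HBase 2.x mini-cluster test harness: HBaseTestingUtility boots a local DFS, a MiniZooKeeperCluster, one master and three region servers per the logged StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}. The following is only a minimal, hypothetical sketch of how a test in this module typically drives that startup; it is not the actual TestRSGroupsAdmin1 source, and the class name MiniClusterStartupSketch is invented for illustration.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class MiniClusterStartupSketch {  // hypothetical name; not the real test class

  // Enforces the per-class timeout seen in the log ("timeout: 13 mins").
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors the option logged above: 1 master, 3 region servers, 3 data nodes, 1 ZK server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    // Brings up DFS, ZooKeeper, the master and the region servers, producing log
    // output of the kind captured in this file.
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}

The random master/region-server/info-server ports noted in the log are the default behavior of this harness, so repeated runs of such a test do not collide on fixed port numbers.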
2023-07-15 18:15:00,437 INFO [Listener at localhost/40085] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-15 18:15:00,978 INFO [Listener at localhost/40085] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:01,022 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:01,023 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:01,023 INFO [Listener at localhost/40085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:01,023 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:01,024 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:01,177 INFO [Listener at localhost/40085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:01,279 DEBUG [Listener at localhost/40085] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-15 18:15:01,372 INFO [Listener at localhost/40085] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41169 2023-07-15 18:15:01,384 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:01,386 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:01,415 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41169 connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:01,467 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:411690x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:01,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41169-0x1016a31dca10000 connected 2023-07-15 18:15:01,502 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:01,503 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:01,507 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:01,518 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41169 2023-07-15 18:15:01,518 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41169 2023-07-15 18:15:01,519 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41169 2023-07-15 18:15:01,519 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41169 2023-07-15 18:15:01,519 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41169 2023-07-15 18:15:01,558 INFO [Listener at localhost/40085] log.Log(170): Logging initialized @7353ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-15 18:15:01,711 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:01,712 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:01,712 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:01,714 INFO [Listener at localhost/40085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 18:15:01,714 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:01,715 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:01,718 INFO [Listener at localhost/40085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 18:15:01,787 INFO [Listener at localhost/40085] http.HttpServer(1146): Jetty bound to port 38831 2023-07-15 18:15:01,789 INFO [Listener at localhost/40085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:01,826 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:01,830 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79ec65fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:01,831 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:01,831 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@66ba5314{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:01,905 INFO [Listener at localhost/40085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:01,920 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:01,920 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:01,923 INFO [Listener at localhost/40085] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:01,931 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:01,958 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@44a80d6e{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:01,971 INFO [Listener at localhost/40085] server.AbstractConnector(333): Started ServerConnector@23d12ea9{HTTP/1.1, (http/1.1)}{0.0.0.0:38831} 2023-07-15 18:15:01,971 INFO [Listener at localhost/40085] server.Server(415): Started @7767ms 2023-07-15 18:15:01,974 INFO [Listener at localhost/40085] master.HMaster(444): hbase.rootdir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955, hbase.cluster.distributed=false 2023-07-15 18:15:02,055 INFO [Listener at localhost/40085] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:02,055 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,055 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,055 INFO [Listener at localhost/40085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 
18:15:02,055 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,056 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:02,063 INFO [Listener at localhost/40085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:02,067 INFO [Listener at localhost/40085] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44901 2023-07-15 18:15:02,069 INFO [Listener at localhost/40085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:02,076 DEBUG [Listener at localhost/40085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:02,078 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,080 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,082 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44901 connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:02,086 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:449010x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:02,088 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44901-0x1016a31dca10001 connected 2023-07-15 18:15:02,088 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:02,089 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:02,090 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:02,091 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44901 2023-07-15 18:15:02,093 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44901 2023-07-15 18:15:02,095 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44901 2023-07-15 18:15:02,095 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44901 2023-07-15 18:15:02,098 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44901 2023-07-15 18:15:02,101 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:02,101 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:02,101 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:02,103 INFO [Listener at localhost/40085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:02,103 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:02,103 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:02,103 INFO [Listener at localhost/40085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:02,106 INFO [Listener at localhost/40085] http.HttpServer(1146): Jetty bound to port 36667 2023-07-15 18:15:02,106 INFO [Listener at localhost/40085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:02,116 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,117 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4e7f1c55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:02,117 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,118 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78c50a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:02,134 INFO [Listener at localhost/40085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:02,136 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:02,136 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:02,136 INFO [Listener at localhost/40085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:02,141 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,146 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5fac79e5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:02,148 INFO [Listener at localhost/40085] server.AbstractConnector(333): Started ServerConnector@4f218500{HTTP/1.1, (http/1.1)}{0.0.0.0:36667} 2023-07-15 18:15:02,148 INFO [Listener at localhost/40085] server.Server(415): Started @7943ms 2023-07-15 18:15:02,166 INFO [Listener at localhost/40085] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:02,166 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,167 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,167 INFO [Listener at localhost/40085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:02,167 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,168 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:02,168 INFO [Listener at localhost/40085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:02,171 INFO [Listener at localhost/40085] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39889 2023-07-15 18:15:02,172 INFO [Listener at localhost/40085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:02,173 DEBUG [Listener at localhost/40085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:02,175 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,177 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,178 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39889 connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:02,185 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:398890x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:02,187 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39889-0x1016a31dca10002 connected 2023-07-15 18:15:02,187 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): 
regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:02,188 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:02,189 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:02,189 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39889 2023-07-15 18:15:02,191 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39889 2023-07-15 18:15:02,193 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39889 2023-07-15 18:15:02,194 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39889 2023-07-15 18:15:02,195 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39889 2023-07-15 18:15:02,198 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:02,198 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:02,198 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:02,199 INFO [Listener at localhost/40085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:02,199 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:02,199 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:02,200 INFO [Listener at localhost/40085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 18:15:02,200 INFO [Listener at localhost/40085] http.HttpServer(1146): Jetty bound to port 35385 2023-07-15 18:15:02,200 INFO [Listener at localhost/40085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:02,211 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,212 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e324a0c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:02,212 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,212 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d567319{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:02,220 INFO [Listener at localhost/40085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:02,221 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:02,221 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:02,222 INFO [Listener at localhost/40085] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:02,223 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,224 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@35ca6370{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:02,225 INFO [Listener at localhost/40085] server.AbstractConnector(333): Started ServerConnector@402f020d{HTTP/1.1, (http/1.1)}{0.0.0.0:35385} 2023-07-15 18:15:02,225 INFO [Listener at localhost/40085] server.Server(415): Started @8021ms 2023-07-15 18:15:02,238 INFO [Listener at localhost/40085] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:02,238 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,238 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,238 INFO [Listener at localhost/40085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:02,238 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-15 18:15:02,239 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:02,239 INFO [Listener at localhost/40085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:02,240 INFO [Listener at localhost/40085] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40191 2023-07-15 18:15:02,241 INFO [Listener at localhost/40085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:02,242 DEBUG [Listener at localhost/40085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:02,243 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,245 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,246 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40191 connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:02,259 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:401910x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:02,264 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40191-0x1016a31dca10003 connected 2023-07-15 18:15:02,264 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:02,265 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:02,266 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:02,271 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40191 2023-07-15 18:15:02,271 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40191 2023-07-15 18:15:02,271 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40191 2023-07-15 18:15:02,275 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40191 2023-07-15 18:15:02,275 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40191 2023-07-15 18:15:02,277 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:02,278 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:02,278 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:02,278 INFO [Listener at localhost/40085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:02,278 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:02,279 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:02,279 INFO [Listener at localhost/40085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:02,280 INFO [Listener at localhost/40085] http.HttpServer(1146): Jetty bound to port 46677 2023-07-15 18:15:02,280 INFO [Listener at localhost/40085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:02,284 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,284 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@265ffb95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:02,284 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,285 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ad0f308{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:02,295 INFO [Listener at localhost/40085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:02,296 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:02,296 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:02,296 INFO [Listener at localhost/40085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:02,298 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:02,298 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@57d756aa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:02,300 INFO [Listener at localhost/40085] server.AbstractConnector(333): Started ServerConnector@11b78595{HTTP/1.1, (http/1.1)}{0.0.0.0:46677} 2023-07-15 18:15:02,300 INFO [Listener at localhost/40085] server.Server(415): Started @8096ms 2023-07-15 18:15:02,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:02,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@c49e52f{HTTP/1.1, (http/1.1)}{0.0.0.0:42019} 2023-07-15 18:15:02,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8130ms 2023-07-15 18:15:02,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:02,351 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:02,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:02,376 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:02,376 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:02,376 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:02,378 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:02,379 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:02,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:02,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:02,385 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41169,1689444900240 from backup master directory 2023-07-15 18:15:02,389 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:02,389 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:02,391 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:02,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:02,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-15 18:15:02,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-15 18:15:02,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/hbase.id with ID: 6555c413-097f-45fa-9eea-583e5f16d41b 2023-07-15 18:15:02,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:02,594 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:02,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4f904202 to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:02,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3babd371, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:02,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:02,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 18:15:02,753 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-15 18:15:02,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-15 18:15:02,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-15 18:15:02,762 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-15 18:15:02,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:02,807 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store-tmp 2023-07-15 18:15:02,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:02,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:02,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:02,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:02,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:02,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:02,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 18:15:02,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:02,850 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/WALs/jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:02,879 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41169%2C1689444900240, suffix=, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/WALs/jenkins-hbase4.apache.org,41169,1689444900240, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/oldWALs, maxLogs=10 2023-07-15 18:15:02,947 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:02,947 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:02,947 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:02,956 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-15 18:15:03,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/WALs/jenkins-hbase4.apache.org,41169,1689444900240/jenkins-hbase4.apache.org%2C41169%2C1689444900240.1689444902893 2023-07-15 18:15:03,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK], DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK]] 2023-07-15 18:15:03,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:03,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:03,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,152 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,166 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 18:15:03,209 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 18:15:03,252 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-15 18:15:03,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,297 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:03,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:03,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10495578400, jitterRate=-0.02252309024333954}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:03,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:03,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 18:15:03,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 18:15:03,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 18:15:03,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 18:15:03,341 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-15 18:15:03,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec 2023-07-15 18:15:03,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 18:15:03,407 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 18:15:03,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-15 18:15:03,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 18:15:03,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 18:15:03,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 18:15:03,438 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:03,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 18:15:03,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 18:15:03,459 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 18:15:03,464 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:03,464 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:03,464 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:03,464 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:03,464 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:03,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41169,1689444900240, sessionid=0x1016a31dca10000, setting cluster-up flag (Was=false) 2023-07-15 18:15:03,486 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:03,491 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 18:15:03,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:03,503 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:03,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 18:15:03,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:03,514 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.hbase-snapshot/.tmp 2023-07-15 18:15:03,593 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 18:15:03,606 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(951): ClusterId : 6555c413-097f-45fa-9eea-583e5f16d41b 2023-07-15 18:15:03,607 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(951): ClusterId : 6555c413-097f-45fa-9eea-583e5f16d41b 2023-07-15 18:15:03,607 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(951): ClusterId : 6555c413-097f-45fa-9eea-583e5f16d41b 2023-07-15 18:15:03,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 18:15:03,611 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:03,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 18:15:03,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-15 18:15:03,622 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:03,622 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:03,622 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:03,630 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:03,630 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:03,630 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:03,630 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:03,632 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:03,632 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:03,638 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:03,640 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:03,640 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:03,640 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ReadOnlyZKClient(139): Connect 0x05af3583 to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:03,642 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ReadOnlyZKClient(139): Connect 0x7ddd3abf to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:03,652 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ReadOnlyZKClient(139): Connect 0x7779fbd0 to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:03,676 DEBUG [RS:0;jenkins-hbase4:44901] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@731acb74, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:03,677 DEBUG [RS:0;jenkins-hbase4:44901] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@772551cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:03,677 DEBUG [RS:1;jenkins-hbase4:39889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79be1b4f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:03,677 DEBUG [RS:1;jenkins-hbase4:39889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7d8190a0, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:03,679 DEBUG [RS:2;jenkins-hbase4:40191] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@b68cb46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:03,679 DEBUG [RS:2;jenkins-hbase4:40191] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a55c17d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:03,714 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39889 2023-07-15 18:15:03,717 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:40191 2023-07-15 18:15:03,718 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:44901 2023-07-15 18:15:03,723 INFO [RS:1;jenkins-hbase4:39889] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:03,723 INFO [RS:2;jenkins-hbase4:40191] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:03,723 INFO [RS:0;jenkins-hbase4:44901] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:03,725 INFO [RS:0;jenkins-hbase4:44901] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:03,725 INFO [RS:2;jenkins-hbase4:40191] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:03,725 INFO [RS:1;jenkins-hbase4:39889] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:03,725 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:03,725 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:03,725 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-15 18:15:03,730 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:44901, startcode=1689444902054 2023-07-15 18:15:03,731 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:40191, startcode=1689444902237 2023-07-15 18:15:03,730 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:39889, startcode=1689444902165 2023-07-15 18:15:03,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:03,759 DEBUG [RS:2;jenkins-hbase4:40191] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:03,760 DEBUG [RS:1;jenkins-hbase4:39889] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:03,759 DEBUG [RS:0;jenkins-hbase4:44901] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:03,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:03,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 18:15:03,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:03,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 18:15:03,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:03,825 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:03,825 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:03,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689444933841 2023-07-15 18:15:03,843 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 18:15:03,848 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42157, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:03,848 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59799, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:03,848 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38037, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:03,848 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:03,848 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 18:15:03,852 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 18:15:03,855 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:03,865 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:03,880 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:03,881 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:03,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 18:15:03,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 18:15:03,891 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 18:15:03,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 18:15:03,952 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:03,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 18:15:03,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 18:15:03,958 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 18:15:03,958 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 18:15:03,959 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 18:15:03,959 WARN [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 18:15:03,959 WARN [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 18:15:03,958 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(2830): Master is not running yet 2023-07-15 18:15:03,959 WARN [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-15 18:15:03,962 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 18:15:03,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 18:15:03,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444903967,5,FailOnTimeoutGroup] 2023-07-15 18:15:03,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444903972,5,FailOnTimeoutGroup] 2023-07-15 18:15:03,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:03,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-15 18:15:04,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:04,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,052 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:04,054 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:04,054 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 2023-07-15 18:15:04,060 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:40191, startcode=1689444902237 2023-07-15 18:15:04,060 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:44901, startcode=1689444902054 2023-07-15 18:15:04,061 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:39889, startcode=1689444902165 2023-07-15 18:15:04,066 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,068 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 18:15:04,068 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 18:15:04,073 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,073 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:04,073 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-15 18:15:04,073 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,074 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:04,074 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 18:15:04,083 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 2023-07-15 18:15:04,083 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44585 2023-07-15 18:15:04,084 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38831 2023-07-15 18:15:04,083 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 2023-07-15 18:15:04,085 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44585 2023-07-15 18:15:04,085 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38831 2023-07-15 18:15:04,086 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 2023-07-15 18:15:04,087 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44585 2023-07-15 18:15:04,087 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38831 2023-07-15 18:15:04,096 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:04,102 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,102 WARN [RS:0;jenkins-hbase4:44901] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:04,102 INFO [RS:0;jenkins-hbase4:44901] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:04,102 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,102 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,103 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,102 WARN [RS:2;jenkins-hbase4:40191] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:04,103 WARN [RS:1;jenkins-hbase4:39889] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:04,103 INFO [RS:2;jenkins-hbase4:40191] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:04,103 INFO [RS:1;jenkins-hbase4:39889] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:04,104 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,104 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,117 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39889,1689444902165] 2023-07-15 18:15:04,118 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44901,1689444902054] 2023-07-15 18:15:04,118 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40191,1689444902237] 2023-07-15 18:15:04,123 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:04,126 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,126 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,127 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,127 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,127 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,128 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:04,129 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,130 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,130 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,132 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info 2023-07-15 18:15:04,133 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:04,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,134 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:04,138 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:04,138 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:04,139 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:04,143 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table 2023-07-15 18:15:04,143 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:04,143 DEBUG [RS:1;jenkins-hbase4:39889] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:04,143 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:04,144 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:04,146 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,152 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:04,162 INFO 
[RS:0;jenkins-hbase4:44901] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:04,164 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:04,167 INFO [RS:2;jenkins-hbase4:40191] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:04,167 INFO [RS:1;jenkins-hbase4:39889] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:04,169 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 18:15:04,180 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:04,184 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:04,185 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10826577600, jitterRate=0.008303612470626831}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:04,185 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:04,185 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:04,185 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:04,185 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:04,185 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:04,185 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:04,189 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:04,189 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:04,205 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:04,206 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 18:15:04,209 INFO [RS:1;jenkins-hbase4:39889] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:04,209 INFO [RS:2;jenkins-hbase4:40191] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:04,209 INFO [RS:0;jenkins-hbase4:44901] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:04,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 18:15:04,220 INFO [RS:1;jenkins-hbase4:39889] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:04,220 INFO [RS:0;jenkins-hbase4:44901] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:04,220 INFO [RS:2;jenkins-hbase4:40191] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:04,221 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,220 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,221 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,223 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:04,227 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:04,227 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:04,235 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 18:15:04,237 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,237 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,237 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:04,238 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,238 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,238 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:04,239 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:04,240 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:1;jenkins-hbase4:39889] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,239 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,240 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 DEBUG [RS:0;jenkins-hbase4:44901] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,241 DEBUG [RS:2;jenkins-hbase4:40191] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:04,247 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,247 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,247 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,247 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:04,248 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,247 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,248 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,249 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,249 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,268 INFO [RS:0;jenkins-hbase4:44901] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:04,270 INFO [RS:2;jenkins-hbase4:40191] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:04,271 INFO [RS:1;jenkins-hbase4:39889] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:04,272 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44901,1689444902054-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,272 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39889,1689444902165-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,272 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40191,1689444902237-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:04,293 INFO [RS:1;jenkins-hbase4:39889] regionserver.Replication(203): jenkins-hbase4.apache.org,39889,1689444902165 started 2023-07-15 18:15:04,293 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39889,1689444902165, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39889, sessionid=0x1016a31dca10002 2023-07-15 18:15:04,293 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:04,294 DEBUG [RS:1;jenkins-hbase4:39889] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,294 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39889,1689444902165' 2023-07-15 18:15:04,294 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:04,295 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39889,1689444902165' 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:04,296 DEBUG [RS:1;jenkins-hbase4:39889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:04,297 DEBUG [RS:1;jenkins-hbase4:39889] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:04,297 INFO [RS:1;jenkins-hbase4:39889] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:04,297 INFO [RS:1;jenkins-hbase4:39889] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 18:15:04,298 INFO [RS:0;jenkins-hbase4:44901] regionserver.Replication(203): jenkins-hbase4.apache.org,44901,1689444902054 started 2023-07-15 18:15:04,298 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44901,1689444902054, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44901, sessionid=0x1016a31dca10001 2023-07-15 18:15:04,298 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:04,298 DEBUG [RS:0;jenkins-hbase4:44901] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,299 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44901,1689444902054' 2023-07-15 18:15:04,300 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:04,300 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:04,301 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:04,301 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:04,301 DEBUG [RS:0;jenkins-hbase4:44901] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:04,301 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44901,1689444902054' 2023-07-15 18:15:04,301 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:04,302 DEBUG [RS:0;jenkins-hbase4:44901] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:04,302 DEBUG [RS:0;jenkins-hbase4:44901] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:04,303 INFO [RS:0;jenkins-hbase4:44901] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:04,303 INFO [RS:0;jenkins-hbase4:44901] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 18:15:04,306 INFO [RS:2;jenkins-hbase4:40191] regionserver.Replication(203): jenkins-hbase4.apache.org,40191,1689444902237 started 2023-07-15 18:15:04,306 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40191,1689444902237, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40191, sessionid=0x1016a31dca10003 2023-07-15 18:15:04,306 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:04,306 DEBUG [RS:2;jenkins-hbase4:40191] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,306 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40191,1689444902237' 2023-07-15 18:15:04,306 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:04,307 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40191,1689444902237' 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:04,308 DEBUG [RS:2;jenkins-hbase4:40191] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:04,309 DEBUG [RS:2;jenkins-hbase4:40191] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:04,309 INFO [RS:2;jenkins-hbase4:40191] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:04,309 INFO [RS:2;jenkins-hbase4:40191] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 18:15:04,394 DEBUG [jenkins-hbase4:41169] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 18:15:04,410 INFO [RS:1;jenkins-hbase4:39889] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39889%2C1689444902165, suffix=, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,39889,1689444902165, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:04,410 INFO [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44901%2C1689444902054, suffix=, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,44901,1689444902054, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:04,413 DEBUG [jenkins-hbase4:41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:04,415 DEBUG [jenkins-hbase4:41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:04,415 DEBUG [jenkins-hbase4:41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:04,415 INFO [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40191%2C1689444902237, suffix=, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,40191,1689444902237, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:04,415 DEBUG [jenkins-hbase4:41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:04,415 DEBUG [jenkins-hbase4:41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:04,420 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40191,1689444902237, state=OPENING 2023-07-15 18:15:04,432 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 18:15:04,435 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:04,436 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:04,451 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:04,463 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:04,464 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:04,464 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:04,464 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:04,468 WARN [ReadOnlyZKClient-127.0.0.1:54099@0x4f904202] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-15 18:15:04,470 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:04,470 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:04,471 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:04,471 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:04,472 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:04,496 INFO [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,44901,1689444902054/jenkins-hbase4.apache.org%2C44901%2C1689444902054.1689444904420 2023-07-15 18:15:04,497 INFO [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,40191,1689444902237/jenkins-hbase4.apache.org%2C40191%2C1689444902237.1689444904420 2023-07-15 18:15:04,498 INFO [RS:1;jenkins-hbase4:39889] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,39889,1689444902165/jenkins-hbase4.apache.org%2C39889%2C1689444902165.1689444904420 2023-07-15 18:15:04,498 DEBUG [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK], 
DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK]] 2023-07-15 18:15:04,499 DEBUG [RS:1;jenkins-hbase4:39889] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK], DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK]] 2023-07-15 18:15:04,502 DEBUG [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK], DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK]] 2023-07-15 18:15:04,510 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:04,514 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:04,515 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40191] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:59412 deadline: 1689444964515, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,677 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:04,683 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:04,690 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:04,706 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 18:15:04,707 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:04,711 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40191%2C1689444902237.meta, suffix=.meta, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,40191,1689444902237, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:04,738 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:04,739 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:04,743 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:04,756 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,40191,1689444902237/jenkins-hbase4.apache.org%2C40191%2C1689444902237.meta.1689444904713.meta 2023-07-15 18:15:04,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK], DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK]] 2023-07-15 18:15:04,757 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:04,759 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:04,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 18:15:04,764 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-15 18:15:04,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 18:15:04,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:04,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 18:15:04,770 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 18:15:04,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:04,776 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info 2023-07-15 18:15:04,776 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info 2023-07-15 18:15:04,776 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:04,777 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,778 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:04,784 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:04,784 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:04,785 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:04,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,788 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:04,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table 2023-07-15 18:15:04,790 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table 2023-07-15 18:15:04,790 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:04,791 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:04,793 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:04,796 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:04,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-15 18:15:04,804 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:04,807 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10233686240, jitterRate=-0.04691369831562042}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:04,807 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:04,824 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689444904672 2023-07-15 18:15:04,852 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 18:15:04,855 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 18:15:04,855 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40191,1689444902237, state=OPEN 2023-07-15 18:15:04,884 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:04,884 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:04,890 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 18:15:04,891 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40191,1689444902237 in 426 msec 2023-07-15 18:15:04,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 18:15:04,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 676 msec 2023-07-15 18:15:04,904 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.2800 sec 2023-07-15 18:15:04,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689444904904, completionTime=-1 2023-07-15 18:15:04,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 18:15:04,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-15 18:15:04,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 18:15:04,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689444964976 2023-07-15 18:15:04,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689445024976 2023-07-15 18:15:04,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 71 msec 2023-07-15 18:15:04,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41169,1689444900240-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41169,1689444900240-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:04,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41169,1689444900240-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:05,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41169, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:05,001 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:05,010 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 18:15:05,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-15 18:15:05,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:05,044 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:05,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 18:15:05,054 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 18:15:05,056 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:05,059 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:05,060 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:05,061 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:05,073 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,073 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,077 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 empty. 2023-07-15 18:15:05,077 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437 empty. 
2023-07-15 18:15:05,078 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,078 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,078 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 18:15:05,078 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 18:15:05,135 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:05,137 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:05,138 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 82724fed0e99f8e969020c075e232437, NAME => 'hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:05,138 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1c87ff5cd30bfdf1c603a34ec3bb14c0, NAME => 'hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:05,194 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:05,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1c87ff5cd30bfdf1c603a34ec3bb14c0, disabling compactions & flushes 2023-07-15 18:15:05,195 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
2023-07-15 18:15:05,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:05,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. after waiting 0 ms 2023-07-15 18:15:05,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:05,195 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:05,195 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1c87ff5cd30bfdf1c603a34ec3bb14c0: 2023-07-15 18:15:05,203 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:05,203 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:05,203 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 82724fed0e99f8e969020c075e232437, disabling compactions & flushes 2023-07-15 18:15:05,204 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:05,204 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:05,204 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. after waiting 0 ms 2023-07-15 18:15:05,204 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:05,204 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 
2023-07-15 18:15:05,204 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 82724fed0e99f8e969020c075e232437: 2023-07-15 18:15:05,208 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:05,223 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444905206"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444905206"}]},"ts":"1689444905206"} 2023-07-15 18:15:05,223 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444905209"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444905209"}]},"ts":"1689444905209"} 2023-07-15 18:15:05,258 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:05,262 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:05,263 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:05,266 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:05,269 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444905266"}]},"ts":"1689444905266"} 2023-07-15 18:15:05,269 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444905263"}]},"ts":"1689444905263"} 2023-07-15 18:15:05,274 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 18:15:05,277 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 18:15:05,282 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:05,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:05,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:05,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:05,283 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:05,283 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:05,284 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:05,284 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:05,284 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 
18:15:05,284 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:05,285 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, ASSIGN}] 2023-07-15 18:15:05,285 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=82724fed0e99f8e969020c075e232437, ASSIGN}] 2023-07-15 18:15:05,288 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=82724fed0e99f8e969020c075e232437, ASSIGN 2023-07-15 18:15:05,288 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, ASSIGN 2023-07-15 18:15:05,291 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:05,291 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=82724fed0e99f8e969020c075e232437, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:05,292 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-15 18:15:05,294 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=82724fed0e99f8e969020c075e232437, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:05,294 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:05,294 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444905293"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444905293"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444905293"}]},"ts":"1689444905293"} 2023-07-15 18:15:05,294 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444905293"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444905293"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444905293"}]},"ts":"1689444905293"} 2023-07-15 18:15:05,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:05,298 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 82724fed0e99f8e969020c075e232437, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:05,453 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:05,454 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:05,457 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47840, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:05,461 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:05,461 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c87ff5cd30bfdf1c603a34ec3bb14c0, NAME => 'hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:05,463 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 
2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 82724fed0e99f8e969020c075e232437, NAME => 'hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:05,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. service=MultiRowMutationService 2023-07-15 18:15:05,464 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-15 18:15:05,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:05,465 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,465 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,466 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,468 INFO [StoreOpener-82724fed0e99f8e969020c075e232437-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,470 DEBUG [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info 2023-07-15 18:15:05,470 DEBUG [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info 2023-07-15 
18:15:05,470 DEBUG [StoreOpener-82724fed0e99f8e969020c075e232437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/m 2023-07-15 18:15:05,470 DEBUG [StoreOpener-82724fed0e99f8e969020c075e232437-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/m 2023-07-15 18:15:05,471 INFO [StoreOpener-82724fed0e99f8e969020c075e232437-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 82724fed0e99f8e969020c075e232437 columnFamilyName m 2023-07-15 18:15:05,471 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c87ff5cd30bfdf1c603a34ec3bb14c0 columnFamilyName info 2023-07-15 18:15:05,472 INFO [StoreOpener-82724fed0e99f8e969020c075e232437-1] regionserver.HStore(310): Store=82724fed0e99f8e969020c075e232437/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:05,472 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] regionserver.HStore(310): Store=1c87ff5cd30bfdf1c603a34ec3bb14c0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:05,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:05,485 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:05,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:05,489 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c87ff5cd30bfdf1c603a34ec3bb14c0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9834379200, jitterRate=-0.08410206437110901}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:05,489 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c87ff5cd30bfdf1c603a34ec3bb14c0: 2023-07-15 18:15:05,492 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0., pid=8, masterSystemTime=1689444905450 2023-07-15 18:15:05,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:05,494 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 82724fed0e99f8e969020c075e232437; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@38fbb150, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:05,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 82724fed0e99f8e969020c075e232437: 2023-07-15 18:15:05,497 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:05,497 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437., pid=9, masterSystemTime=1689444905453 2023-07-15 18:15:05,498 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
2023-07-15 18:15:05,501 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:05,502 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444905500"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444905500"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444905500"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444905500"}]},"ts":"1689444905500"} 2023-07-15 18:15:05,503 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:05,504 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:05,505 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=82724fed0e99f8e969020c075e232437, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:05,506 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444905505"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444905505"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444905505"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444905505"}]},"ts":"1689444905505"} 2023-07-15 18:15:05,524 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-15 18:15:05,524 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,40191,1689444902237 in 214 msec 2023-07-15 18:15:05,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-15 18:15:05,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 82724fed0e99f8e969020c075e232437, server=jenkins-hbase4.apache.org,44901,1689444902054 in 215 msec 2023-07-15 18:15:05,530 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-15 18:15:05,530 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, ASSIGN in 239 msec 2023-07-15 18:15:05,533 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:05,533 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444905533"}]},"ts":"1689444905533"} 2023-07-15 18:15:05,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=6, resume processing ppid=5 2023-07-15 18:15:05,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=82724fed0e99f8e969020c075e232437, ASSIGN in 242 msec 2023-07-15 18:15:05,537 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 18:15:05,537 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:05,538 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444905538"}]},"ts":"1689444905538"} 2023-07-15 18:15:05,543 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:05,544 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 18:15:05,549 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 511 msec 2023-07-15 18:15:05,549 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:05,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 18:15:05,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 505 msec 2023-07-15 18:15:05,555 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:05,555 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:05,592 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:05,596 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47854, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:05,601 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 18:15:05,601 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-15 18:15:05,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 18:15:05,639 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:05,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 58 msec 2023-07-15 18:15:05,659 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 18:15:05,677 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:05,685 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:05,685 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:05,686 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 24 msec 2023-07-15 18:15:05,688 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:05,695 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 18:15:05,697 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 18:15:05,700 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 18:15:05,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.309sec 2023-07-15 18:15:05,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-15 18:15:05,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-15 18:15:05,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 18:15:05,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41169,1689444900240-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-15 18:15:05,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41169,1689444900240-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-15 18:15:05,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 18:15:05,742 DEBUG [Listener at localhost/40085] zookeeper.ReadOnlyZKClient(139): Connect 0x16d27b18 to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:05,752 DEBUG [Listener at localhost/40085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22907d69, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:05,770 DEBUG [hconnection-0x3c71af44-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:05,789 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59440, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:05,800 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:05,801 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:05,812 DEBUG [Listener at localhost/40085] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 18:15:05,816 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42212, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 18:15:05,833 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 18:15:05,833 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:05,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 18:15:05,841 DEBUG [Listener at localhost/40085] zookeeper.ReadOnlyZKClient(139): Connect 0x778b9fa2 to 127.0.0.1:54099 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:05,847 DEBUG [Listener at localhost/40085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4bd699e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-15 18:15:05,847 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:05,851 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:05,852 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016a31dca1000a connected 2023-07-15 18:15:05,916 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=418, OpenFileDescriptor=691, MaxFileDescriptor=60000, SystemLoadAverage=422, ProcessCount=172, AvailableMemoryMB=4087 2023-07-15 18:15:05,919 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-15 18:15:05,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:05,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:06,044 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-15 18:15:06,063 INFO [Listener at localhost/40085] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:06,063 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:06,063 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:06,063 INFO [Listener at localhost/40085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:06,063 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:06,064 INFO [Listener at localhost/40085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:06,064 INFO [Listener at localhost/40085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:06,068 INFO [Listener at localhost/40085] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37155 2023-07-15 18:15:06,068 INFO [Listener at localhost/40085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:06,070 DEBUG [Listener at localhost/40085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:06,072 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:06,077 INFO [Listener at localhost/40085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:06,083 INFO [Listener at localhost/40085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37155 connecting to ZooKeeper ensemble=127.0.0.1:54099 2023-07-15 18:15:06,090 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:371550x0, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:06,092 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(162): regionserver:371550x0, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:06,093 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(162): regionserver:371550x0, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-15 18:15:06,095 DEBUG [Listener at localhost/40085] zookeeper.ZKUtil(164): regionserver:371550x0, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:06,095 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37155-0x1016a31dca1000b connected 2023-07-15 18:15:06,100 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37155 2023-07-15 18:15:06,102 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37155 2023-07-15 18:15:06,103 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37155 2023-07-15 18:15:06,104 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37155 2023-07-15 18:15:06,104 DEBUG [Listener at localhost/40085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37155 2023-07-15 18:15:06,106 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:06,107 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:06,107 INFO [Listener at localhost/40085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:06,107 INFO [Listener at localhost/40085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:06,107 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:06,107 INFO [Listener at localhost/40085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:06,108 INFO [Listener at localhost/40085] 
http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:06,108 INFO [Listener at localhost/40085] http.HttpServer(1146): Jetty bound to port 43863 2023-07-15 18:15:06,108 INFO [Listener at localhost/40085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:06,113 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:06,114 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5cf91eb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:06,114 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:06,114 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31b086ec{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:06,122 INFO [Listener at localhost/40085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:06,123 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:06,123 INFO [Listener at localhost/40085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:06,124 INFO [Listener at localhost/40085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:06,126 INFO [Listener at localhost/40085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:06,127 INFO [Listener at localhost/40085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@42399250{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:06,128 INFO [Listener at localhost/40085] server.AbstractConnector(333): Started ServerConnector@57fbd536{HTTP/1.1, (http/1.1)}{0.0.0.0:43863} 2023-07-15 18:15:06,128 INFO [Listener at localhost/40085] server.Server(415): Started @11924ms 2023-07-15 18:15:06,133 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(951): ClusterId : 6555c413-097f-45fa-9eea-583e5f16d41b 2023-07-15 18:15:06,133 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:06,136 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:06,136 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:06,138 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:06,140 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ReadOnlyZKClient(139): Connect 0x7095516b to 127.0.0.1:54099 with 
session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:06,145 DEBUG [RS:3;jenkins-hbase4:37155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e52b109, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:06,145 DEBUG [RS:3;jenkins-hbase4:37155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a6d3a1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:06,155 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:37155 2023-07-15 18:15:06,155 INFO [RS:3;jenkins-hbase4:37155] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:06,155 INFO [RS:3;jenkins-hbase4:37155] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:06,155 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:06,156 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41169,1689444900240 with isa=jenkins-hbase4.apache.org/172.31.14.131:37155, startcode=1689444906062 2023-07-15 18:15:06,156 DEBUG [RS:3;jenkins-hbase4:37155] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:06,160 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:06,160 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41169] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,160 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 18:15:06,161 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955 2023-07-15 18:15:06,161 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44585 2023-07-15 18:15:06,161 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38831 2023-07-15 18:15:06,167 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:06,167 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:06,167 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:06,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:06,167 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:06,168 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37155,1689444906062] 2023-07-15 18:15:06,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:06,169 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:06,169 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ZKUtil(162): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:06,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:06,173 WARN [RS:3;jenkins-hbase4:37155] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; 
znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:06,173 INFO [RS:3;jenkins-hbase4:37155] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:06,173 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,176 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41169,1689444900240] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-15 18:15:06,176 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,177 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,176 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,177 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,177 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,179 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,182 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ZKUtil(162): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:06,182 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ZKUtil(162): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,183 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ZKUtil(162): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,183 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ZKUtil(162): regionserver:37155-0x1016a31dca1000b, 
quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,185 DEBUG [RS:3;jenkins-hbase4:37155] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:06,185 INFO [RS:3;jenkins-hbase4:37155] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:06,190 INFO [RS:3;jenkins-hbase4:37155] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:06,191 INFO [RS:3;jenkins-hbase4:37155] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:06,191 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:06,194 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:06,197 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:06,197 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,197 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,197 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,197 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,198 DEBUG [RS:3;jenkins-hbase4:37155] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:06,200 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:06,200 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:06,200 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:06,213 INFO [RS:3;jenkins-hbase4:37155] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:06,213 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37155,1689444906062-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:06,239 INFO [RS:3;jenkins-hbase4:37155] regionserver.Replication(203): jenkins-hbase4.apache.org,37155,1689444906062 started 2023-07-15 18:15:06,239 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37155,1689444906062, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37155, sessionid=0x1016a31dca1000b 2023-07-15 18:15:06,239 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:06,239 DEBUG [RS:3;jenkins-hbase4:37155] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,239 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37155,1689444906062' 2023-07-15 18:15:06,239 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37155,1689444906062' 2023-07-15 18:15:06,240 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:06,241 DEBUG [RS:3;jenkins-hbase4:37155] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:06,241 DEBUG [RS:3;jenkins-hbase4:37155] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:06,241 INFO [RS:3;jenkins-hbase4:37155] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:06,241 INFO [RS:3;jenkins-hbase4:37155] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
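The entries above trace a fourth region server (RS:3, port 37155) joining the already-running mini-cluster: it registers its ephemeral znode under /hbase/rs, the other servers and the master set watchers on it, and the rsgroup listener reports "Updated with servers: 4". As a rough illustration only (this is not the TestRSGroupsAdmin1 source; it merely assumes a JUnit test holding an HBaseTestingUtility named testUtil), starting such an extra server from test code looks roughly like:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    // Sketch: add one more HRegionServer to a running mini-cluster.
    public class ExtraRegionServerSketch {
      static void startFourthRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        // The new server creates its ephemeral node under /hbase/rs, which produces the
        // NodeChildrenChanged events and the RegionServerTracker message seen above.
        cluster.startRegionServer();
      }
    }
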
2023-07-15 18:15:06,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:06,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:06,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:06,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:06,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:06,259 DEBUG [hconnection-0x3d1b204c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:06,264 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:06,270 DEBUG [hconnection-0x3d1b204c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:06,272 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47860, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:06,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:06,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:06,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:06,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:06,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:42212 deadline: 1689446106293, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:06,295 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:06,298 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:06,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:06,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:06,300 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:06,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:06,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:06,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:06,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:06,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:06,327 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:06,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:06,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:06,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:06,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:06,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:06,344 INFO [RS:3;jenkins-hbase4:37155] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37155%2C1689444906062, suffix=, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,37155,1689444906062, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:06,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:06,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:06,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:06,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to default 2023-07-15 18:15:06,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:06,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:06,359 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:06,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:06,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:06,377 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:06,379 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:06,380 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:06,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:06,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:06,394 INFO [RS:3;jenkins-hbase4:37155] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,37155,1689444906062/jenkins-hbase4.apache.org%2C37155%2C1689444906062.1689444906348 2023-07-15 18:15:06,395 DEBUG [RS:3;jenkins-hbase4:37155] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK], DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK], DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK]] 2023-07-15 18:15:06,398 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:06,403 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:06,404 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 
18:15:06,405 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:06,406 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:06,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-15 18:15:06,419 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:06,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:06,432 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:06,432 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:06,432 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:06,432 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:06,434 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:06,434 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 empty. 2023-07-15 18:15:06,434 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 empty. 2023-07-15 18:15:06,434 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 empty. 2023-07-15 18:15:06,435 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 empty. 2023-07-15 18:15:06,435 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f empty. 
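The rsgroup calls logged between 18:15:06,246 and 18:15:06,350 are driven from the client side of the test: it adds a group named "master", tries to move the master's own address into it (which the server rejects with ConstraintException, since 41169 is the master RPC port and not a live region server), then adds the Group_testTableMoveTruncateAndDrop_1729975212 group and successfully moves the 37155 and 39889 region servers into it. A hedged sketch of those calls, mirrored from the log rather than taken from the test source, using the branch-2.4 RSGroupAdminClient API:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch of the rsgroup setup seen in the log; illustrative only.
    public class RSGroupSetupSketch {
      static void setUpGroups(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

        // "add rsgroup master" at 18:15:06,246
        rsGroupAdmin.addRSGroup("master");
        try {
          // 41169 is the master, not a region server, so the move is rejected with
          // ConstraintException ("... is either offline or it does not exist").
          Set<Address> master = new HashSet<>(
              Arrays.asList(Address.fromParts("jenkins-hbase4.apache.org", 41169)));
          rsGroupAdmin.moveServers(master, "master");
        } catch (ConstraintException expected) {
          // TestRSGroupsBase logs this as "Got this on setup, FYI" and continues.
        }

        // "add rsgroup Group_testTableMoveTruncateAndDrop_1729975212" and the
        // successful move of region servers 37155 and 39889 into it.
        String group = "Group_testTableMoveTruncateAndDrop_1729975212";
        rsGroupAdmin.addRSGroup(group);
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 37155),
            Address.fromParts("jenkins-hbase4.apache.org", 39889)));
        rsGroupAdmin.moveServers(servers, group);
      }
    }
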
2023-07-15 18:15:06,435 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:06,436 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:06,436 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:06,436 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:06,436 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:06,436 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 18:15:06,471 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:06,473 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e7dcb99f03f5499042992813a6c11816, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:06,473 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => f93e28113e0db3bc7de259bc766d8660, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:06,474 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 36c6e89b8a6dbcb15b716f1027b1d05f, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:06,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:06,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:06,575 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing f93e28113e0db3bc7de259bc766d8660, disabling compactions & flushes 2023-07-15 18:15:06,576 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:06,576 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:06,576 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. after waiting 0 ms 2023-07-15 18:15:06,576 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:06,576 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 
2023-07-15 18:15:06,576 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for f93e28113e0db3bc7de259bc766d8660: 2023-07-15 18:15:06,576 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 687d0ffc1c561b5c571b9ae6cf917b74, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:06,569 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:06,577 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e7dcb99f03f5499042992813a6c11816, disabling compactions & flushes 2023-07-15 18:15:06,577 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:06,577 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:06,577 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. after waiting 0 ms 2023-07-15 18:15:06,577 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:06,577 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 
2023-07-15 18:15:06,577 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e7dcb99f03f5499042992813a6c11816: 2023-07-15 18:15:06,578 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => bf204421b886b079a47a6c35915b7fa0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:06,579 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:06,580 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 36c6e89b8a6dbcb15b716f1027b1d05f, disabling compactions & flushes 2023-07-15 18:15:06,580 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:06,580 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:06,580 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. after waiting 0 ms 2023-07-15 18:15:06,580 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:06,580 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
2023-07-15 18:15:06,580 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 36c6e89b8a6dbcb15b716f1027b1d05f: 2023-07-15 18:15:06,624 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:06,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing bf204421b886b079a47a6c35915b7fa0, disabling compactions & flushes 2023-07-15 18:15:06,627 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:06,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:06,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. after waiting 0 ms 2023-07-15 18:15:06,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:06,627 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:06,627 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for bf204421b886b079a47a6c35915b7fa0: 2023-07-15 18:15:06,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:06,631 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 687d0ffc1c561b5c571b9ae6cf917b74, disabling compactions & flushes 2023-07-15 18:15:06,631 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:06,631 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:06,631 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 
after waiting 0 ms 2023-07-15 18:15:06,631 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:06,631 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:06,631 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 687d0ffc1c561b5c571b9ae6cf917b74: 2023-07-15 18:15:06,636 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:06,637 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444906637"}]},"ts":"1689444906637"} 2023-07-15 18:15:06,637 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444906637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444906637"}]},"ts":"1689444906637"} 2023-07-15 18:15:06,637 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444906637"}]},"ts":"1689444906637"} 2023-07-15 18:15:06,637 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444906637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444906637"}]},"ts":"1689444906637"} 2023-07-15 18:15:06,638 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906637"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444906637"}]},"ts":"1689444906637"} 2023-07-15 18:15:06,696 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
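The createTable request logged at 18:15:06,391 asks for a table named Group_testTableMoveTruncateAndDrop with a single family 'f' and pre-split regions; the five regions initialized above (start keys '', aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz) and the "Added 5 regions to meta" entry follow from four split keys. A hedged sketch of an equivalent client call, with the split keys hard-coded from the log (the actual test may compute them differently):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: create the pre-split table described in the log; illustrative only.
    public class CreateTableSketch {
      static void createPreSplitTable(Admin admin) throws Exception {
        TableDescriptorBuilder td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));

        // Four split keys give the five regions listed in the meta puts above.
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
            new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
            Bytes.toBytes("zzzzz")
        };

        // The master runs CreateTableProcedure (pid=12 in the log) and then
        // schedules one TransitRegionStateProcedure per region for assignment.
        admin.createTable(td.build(), splitKeys);
      }
    }
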
2023-07-15 18:15:06,698 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:06,699 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444906699"}]},"ts":"1689444906699"} 2023-07-15 18:15:06,702 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-15 18:15:06,714 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:06,714 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:06,714 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:06,714 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:06,715 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, ASSIGN}] 2023-07-15 18:15:06,720 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, ASSIGN 2023-07-15 18:15:06,721 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:06,723 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, ASSIGN 2023-07-15 18:15:06,724 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, ASSIGN 
2023-07-15 18:15:06,724 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, ASSIGN 2023-07-15 18:15:06,725 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, ASSIGN 2023-07-15 18:15:06,728 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:06,729 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:06,730 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:06,730 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:06,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:06,873 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-15 18:15:06,876 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,876 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,877 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444906876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444906876"}]},"ts":"1689444906876"} 2023-07-15 18:15:06,877 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444906876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444906876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444906876"}]},"ts":"1689444906876"} 2023-07-15 18:15:06,876 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,877 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444906876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444906876"}]},"ts":"1689444906876"} 2023-07-15 18:15:06,876 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:06,878 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444906876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444906876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444906876"}]},"ts":"1689444906876"} 2023-07-15 18:15:06,876 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:06,878 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444906876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444906876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444906876"}]},"ts":"1689444906876"} 2023-07-15 18:15:06,884 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 
bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:06,890 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=14, state=RUNNABLE; OpenRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:06,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=16, state=RUNNABLE; OpenRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:06,894 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=13, state=RUNNABLE; OpenRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:06,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:07,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:07,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7dcb99f03f5499042992813a6c11816, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 18:15:07,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93e28113e0db3bc7de259bc766d8660, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 18:15:07,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated 
Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,071 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,073 DEBUG [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/f 2023-07-15 18:15:07,073 DEBUG [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/f 2023-07-15 18:15:07,074 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93e28113e0db3bc7de259bc766d8660 columnFamilyName f 2023-07-15 18:15:07,076 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] regionserver.HStore(310): Store=f93e28113e0db3bc7de259bc766d8660/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:07,076 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,098 DEBUG [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/f 2023-07-15 18:15:07,099 DEBUG [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/f 2023-07-15 18:15:07,099 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7dcb99f03f5499042992813a6c11816 columnFamilyName f 2023-07-15 18:15:07,099 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:07,100 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] regionserver.HStore(310): Store=e7dcb99f03f5499042992813a6c11816/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:07,100 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93e28113e0db3bc7de259bc766d8660; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11123960480, jitterRate=0.03599955141544342}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:07,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93e28113e0db3bc7de259bc766d8660: 2023-07-15 18:15:07,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,107 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,107 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660., pid=19, masterSystemTime=1689444907053 2023-07-15 18:15:07,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:07,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 687d0ffc1c561b5c571b9ae6cf917b74, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 18:15:07,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:07,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,116 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,117 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907116"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444907116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444907116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444907116"}]},"ts":"1689444907116"} 2023-07-15 18:15:07,117 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,125 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume 
processing ppid=14 2023-07-15 18:15:07,131 DEBUG [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/f 2023-07-15 18:15:07,128 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, ASSIGN in 410 msec 2023-07-15 18:15:07,132 DEBUG [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/f 2023-07-15 18:15:07,132 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=14, state=SUCCESS; OpenRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,44901,1689444902054 in 231 msec 2023-07-15 18:15:07,132 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 687d0ffc1c561b5c571b9ae6cf917b74 columnFamilyName f 2023-07-15 18:15:07,133 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] regionserver.HStore(310): Store=687d0ffc1c561b5c571b9ae6cf917b74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:07,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:07,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e7dcb99f03f5499042992813a6c11816; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10153698560, jitterRate=-0.054363131523132324}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:07,138 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e7dcb99f03f5499042992813a6c11816: 2023-07-15 18:15:07,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816., pid=21, masterSystemTime=1689444907037 2023-07-15 18:15:07,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,146 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:07,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bf204421b886b079a47a6c35915b7fa0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 18:15:07,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:07,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,148 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:07,148 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907148"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444907148"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444907148"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444907148"}]},"ts":"1689444907148"} 2023-07-15 18:15:07,149 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,152 DEBUG [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/f 2023-07-15 18:15:07,152 DEBUG [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/f 2023-07-15 18:15:07,152 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bf204421b886b079a47a6c35915b7fa0 columnFamilyName f 2023-07-15 18:15:07,153 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] regionserver.HStore(310): Store=bf204421b886b079a47a6c35915b7fa0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:07,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=13 2023-07-15 18:15:07,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=13, state=SUCCESS; OpenRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,40191,1689444902237 in 257 msec 2023-07-15 18:15:07,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,172 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, ASSIGN in 446 msec 2023-07-15 18:15:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:07,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:07,182 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bf204421b886b079a47a6c35915b7fa0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11534310400, jitterRate=0.07421636581420898}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:07,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bf204421b886b079a47a6c35915b7fa0: 2023-07-15 18:15:07,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 687d0ffc1c561b5c571b9ae6cf917b74; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10619524480, jitterRate=-0.010979712009429932}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:07,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 687d0ffc1c561b5c571b9ae6cf917b74: 2023-07-15 18:15:07,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0., pid=18, masterSystemTime=1689444907037 2023-07-15 18:15:07,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74., pid=20, masterSystemTime=1689444907053 2023-07-15 18:15:07,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:07,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:07,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
2023-07-15 18:15:07,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36c6e89b8a6dbcb15b716f1027b1d05f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 18:15:07,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:07,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,193 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,194 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907193"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444907193"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444907193"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444907193"}]},"ts":"1689444907193"} 2023-07-15 18:15:07,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:07,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 
2023-07-15 18:15:07,196 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:07,199 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,199 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444907196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444907196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444907196"}]},"ts":"1689444907196"} 2023-07-15 18:15:07,205 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=16 2023-07-15 18:15:07,205 DEBUG [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/f 2023-07-15 18:15:07,207 DEBUG [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/f 2023-07-15 18:15:07,205 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=16, state=SUCCESS; OpenRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,44901,1689444902054 in 308 msec 2023-07-15 18:15:07,207 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36c6e89b8a6dbcb15b716f1027b1d05f columnFamilyName f 2023-07-15 18:15:07,209 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] regionserver.HStore(310): Store=36c6e89b8a6dbcb15b716f1027b1d05f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:07,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,212 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,212 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-15 18:15:07,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, ASSIGN in 490 msec 2023-07-15 18:15:07,212 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,40191,1689444902237 in 318 msec 2023-07-15 18:15:07,215 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, ASSIGN in 497 msec 2023-07-15 18:15:07,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:07,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 36c6e89b8a6dbcb15b716f1027b1d05f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10673982080, jitterRate=-0.005907952785491943}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:07,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36c6e89b8a6dbcb15b716f1027b1d05f: 2023-07-15 18:15:07,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f., pid=22, masterSystemTime=1689444907053 2023-07-15 18:15:07,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:07,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
2023-07-15 18:15:07,227 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,227 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907227"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444907227"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444907227"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444907227"}]},"ts":"1689444907227"} 2023-07-15 18:15:07,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-15 18:15:07,233 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,44901,1689444902054 in 333 msec 2023-07-15 18:15:07,238 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-15 18:15:07,239 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, ASSIGN in 518 msec 2023-07-15 18:15:07,240 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:07,241 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444907240"}]},"ts":"1689444907240"} 2023-07-15 18:15:07,245 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-15 18:15:07,248 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:07,251 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 857 msec 2023-07-15 18:15:07,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:07,551 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-15 18:15:07,551 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-15 18:15:07,552 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:07,558 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
2023-07-15 18:15:07,559 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:07,559 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-15 18:15:07,560 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:07,565 DEBUG [Listener at localhost/40085] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:07,569 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:07,572 DEBUG [Listener at localhost/40085] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:07,576 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:07,576 DEBUG [Listener at localhost/40085] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:07,578 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59466, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:07,579 DEBUG [Listener at localhost/40085] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:07,583 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:07,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:07,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:07,598 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:07,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:07,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:07,618 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region e7dcb99f03f5499042992813a6c11816 to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:07,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:07,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:07,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:07,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:07,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, REOPEN/MOVE 2023-07-15 18:15:07,622 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, REOPEN/MOVE 2023-07-15 18:15:07,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region f93e28113e0db3bc7de259bc766d8660 to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:07,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:07,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:07,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:07,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:07,624 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:07,624 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907624"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907624"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907624"}]},"ts":"1689444907624"} 2023-07-15 18:15:07,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, REOPEN/MOVE 2023-07-15 18:15:07,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 36c6e89b8a6dbcb15b716f1027b1d05f to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,626 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, REOPEN/MOVE 2023-07-15 18:15:07,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:07,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:07,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:07,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:07,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:07,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, REOPEN/MOVE 2023-07-15 18:15:07,631 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 687d0ffc1c561b5c571b9ae6cf917b74 to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,632 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, REOPEN/MOVE 2023-07-15 18:15:07,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:07,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:07,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 
18:15:07,633 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:07,633 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907631"}]},"ts":"1689444907631"} 2023-07-15 18:15:07,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:07,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:07,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, REOPEN/MOVE 2023-07-15 18:15:07,636 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,637 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, REOPEN/MOVE 2023-07-15 18:15:07,637 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907636"}]},"ts":"1689444907636"} 2023-07-15 18:15:07,638 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:07,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region bf204421b886b079a47a6c35915b7fa0 to RSGroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:07,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:07,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:07,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:07,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:07,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:07,639 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:07,639 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907639"}]},"ts":"1689444907639"} 2023-07-15 18:15:07,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, REOPEN/MOVE 2023-07-15 18:15:07,642 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, REOPEN/MOVE 2023-07-15 18:15:07,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1729975212, current retry=0 2023-07-15 18:15:07,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; CloseRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:07,646 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:07,646 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907646"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907646"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907646"}]},"ts":"1689444907646"} 2023-07-15 18:15:07,648 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:07,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:07,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,803 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bf204421b886b079a47a6c35915b7fa0, disabling compactions & flushes 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
f93e28113e0db3bc7de259bc766d8660, disabling compactions & flushes 2023-07-15 18:15:07,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:07,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. after waiting 0 ms 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. after waiting 0 ms 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:07,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:07,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:07,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93e28113e0db3bc7de259bc766d8660: 2023-07-15 18:15:07,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f93e28113e0db3bc7de259bc766d8660 move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:07,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:07,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:07,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 
2023-07-15 18:15:07,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bf204421b886b079a47a6c35915b7fa0: 2023-07-15 18:15:07,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bf204421b886b079a47a6c35915b7fa0 move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:07,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,816 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=CLOSED 2023-07-15 18:15:07,816 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907816"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444907816"}]},"ts":"1689444907816"} 2023-07-15 18:15:07,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36c6e89b8a6dbcb15b716f1027b1d05f, disabling compactions & flushes 2023-07-15 18:15:07,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:07,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:07,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. after waiting 0 ms 2023-07-15 18:15:07,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:07,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:07,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e7dcb99f03f5499042992813a6c11816, disabling compactions & flushes 2023-07-15 18:15:07,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 
after waiting 0 ms 2023-07-15 18:15:07,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,827 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=CLOSED 2023-07-15 18:15:07,827 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907827"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444907827"}]},"ts":"1689444907827"} 2023-07-15 18:15:07,830 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-15 18:15:07,830 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,44901,1689444902054 in 186 msec 2023-07-15 18:15:07,832 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:07,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-15 18:15:07,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,40191,1689444902237 in 178 msec 2023-07-15 18:15:07,836 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:07,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:07,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:07,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
2023-07-15 18:15:07,842 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36c6e89b8a6dbcb15b716f1027b1d05f: 2023-07-15 18:15:07,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 36c6e89b8a6dbcb15b716f1027b1d05f move to jenkins-hbase4.apache.org,39889,1689444902165 record at close sequenceid=2 2023-07-15 18:15:07,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:07,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e7dcb99f03f5499042992813a6c11816: 2023-07-15 18:15:07,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e7dcb99f03f5499042992813a6c11816 move to jenkins-hbase4.apache.org,39889,1689444902165 record at close sequenceid=2 2023-07-15 18:15:07,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:07,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,848 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=CLOSED 2023-07-15 18:15:07,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 687d0ffc1c561b5c571b9ae6cf917b74, disabling compactions & flushes 2023-07-15 18:15:07,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:07,848 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907848"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444907848"}]},"ts":"1689444907848"} 2023-07-15 18:15:07,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:07,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. after waiting 0 ms 2023-07-15 18:15:07,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 
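[editor note] The RegionStateStore "Put" entries above and below show how each transition is persisted into the info family of hbase:meta (qualifiers regioninfo, sn, state, and later server/serverstartcode/seqnumDuringOpen). For reference, a minimal client-side sketch for reading that state column back is shown here; the row key is copied from the log, but the surrounding class and connection handling are illustrative only, not part of this test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaStatePeek {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Row key format exactly as it appears in the log:
      // <table>,<startKey>,<regionTimestamp>.<encodedRegionName>.
      byte[] row = Bytes.toBytes(
          "Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.");
      Result r = meta.get(new Get(row));
      // The procedures above write these qualifiers into the "info" family.
      byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
      byte[] sn    = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"));
      System.out.println("state=" + (state == null ? "<none>" : Bytes.toString(state))
          + ", sn=" + (sn == null ? "<none>" : Bytes.toString(sn)));
    }
  }
}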
2023-07-15 18:15:07,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:07,853 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=CLOSED 2023-07-15 18:15:07,853 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907853"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444907853"}]},"ts":"1689444907853"} 2023-07-15 18:15:07,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-15 18:15:07,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; CloseRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,44901,1689444902054 in 210 msec 2023-07-15 18:15:07,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-15 18:15:07,859 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:07,859 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,40191,1689444902237 in 223 msec 2023-07-15 18:15:07,860 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:07,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:07,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 
2023-07-15 18:15:07,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 687d0ffc1c561b5c571b9ae6cf917b74: 2023-07-15 18:15:07,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 687d0ffc1c561b5c571b9ae6cf917b74 move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:07,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:07,867 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=CLOSED 2023-07-15 18:15:07,867 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444907867"}]},"ts":"1689444907867"} 2023-07-15 18:15:07,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-15 18:15:07,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,44901,1689444902054 in 221 msec 2023-07-15 18:15:07,873 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:07,983 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
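[editor note] The close/reassign sequence above ("Moving region ... to RSGroup ...", CloseRegionProcedure, then "Reassigned 5 regions") is what the master runs after a client asks the rsgroup endpoint to move a table. A minimal sketch of that client side, assuming the RSGroupAdminClient API from the hbase-rsgroup module this branch-2.4 test exercises (the server address shown is illustrative; group and table names are taken from the log):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_1729975212"; // group name from the log
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

      rsGroupAdmin.addRSGroup(group);
      // Move at least one region server into the new group (host:port is illustrative).
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37155)), group);
      // Move the table; the master then reopens its regions on the group's servers,
      // which is the REOPEN/MOVE procedure chain visible in the log above.
      rsGroupAdmin.moveTables(Collections.singleton(table), group);
    }
  }
}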
2023-07-15 18:15:07,984 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:07,984 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:07,984 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:07,985 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907983"}]},"ts":"1689444907983"} 2023-07-15 18:15:07,984 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:07,985 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907983"}]},"ts":"1689444907983"} 2023-07-15 18:15:07,985 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444907983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907983"}]},"ts":"1689444907983"} 2023-07-15 18:15:07,985 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907984"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907984"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907984"}]},"ts":"1689444907984"} 2023-07-15 18:15:07,985 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:07,986 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444907985"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444907985"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444907985"}]},"ts":"1689444907985"} 2023-07-15 18:15:07,987 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=25, state=RUNNABLE; OpenRegionProcedure 
36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:07,989 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=23, state=RUNNABLE; OpenRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:07,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=29, state=RUNNABLE; OpenRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:07,993 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=27, state=RUNNABLE; OpenRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:07,994 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=24, state=RUNNABLE; OpenRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:08,141 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:08,141 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:08,144 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:08,146 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,146 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:08,147 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47678, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:08,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 
2023-07-15 18:15:08,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e7dcb99f03f5499042992813a6c11816, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 18:15:08,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:08,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:08,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 687d0ffc1c561b5c571b9ae6cf917b74, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 18:15:08,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:08,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,152 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,153 DEBUG [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/f 2023-07-15 18:15:08,153 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,153 DEBUG [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/f 2023-07-15 18:15:08,154 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e7dcb99f03f5499042992813a6c11816 columnFamilyName f 2023-07-15 18:15:08,155 DEBUG [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/f 2023-07-15 18:15:08,155 DEBUG [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/f 2023-07-15 18:15:08,155 INFO [StoreOpener-e7dcb99f03f5499042992813a6c11816-1] regionserver.HStore(310): Store=e7dcb99f03f5499042992813a6c11816/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:08,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,157 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 687d0ffc1c561b5c571b9ae6cf917b74 columnFamilyName f 2023-07-15 18:15:08,158 INFO [StoreOpener-687d0ffc1c561b5c571b9ae6cf917b74-1] regionserver.HStore(310): Store=687d0ffc1c561b5c571b9ae6cf917b74/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:08,158 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e7dcb99f03f5499042992813a6c11816; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11713366240, jitterRate=0.0908922404050827}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:08,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e7dcb99f03f5499042992813a6c11816: 2023-07-15 18:15:08,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816., pid=34, masterSystemTime=1689444908140 2023-07-15 18:15:08,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 687d0ffc1c561b5c571b9ae6cf917b74; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11377946880, jitterRate=0.0596538782119751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:08,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 687d0ffc1c561b5c571b9ae6cf917b74: 2023-07-15 18:15:08,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:08,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:08,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
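[editor note] The "Opened ...; next sequenceid=5; SteppingSplitPolicy..." lines above report the split policy and the per-region jittered desiredMaxFileSize computed at open time (values scatter around a ~10 GB base because of jitter). If one wanted to pin these explicitly on a table instead of relying on defaults, a hedged sketch using the 2.x TableDescriptorBuilder API might look like the following; the max-file-size value is illustrative, while the family name and split keys are taken from the region boundaries visible in the log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class SplitPolicyExample {
  // Descriptor for a table like the one in the log: one family "f",
  // an explicit split policy, and an explicit region max file size.
  static TableDescriptor describe() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        // Same policy the regions report when they open ("SteppingSplitPolicy").
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        // Base size before jitter; illustrative, roughly matching the logged values.
        .setMaxFileSize(10L * 1024 * 1024 * 1024)
        .build();
  }

  static void create(Admin admin) throws java.io.IOException {
    // Pre-split into 5 regions, matching the start keys of the regions in the log.
    byte[][] splits = {
        org.apache.hadoop.hbase.util.Bytes.toBytes("aaaaa"),
        new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},
        new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},
        org.apache.hadoop.hbase.util.Bytes.toBytes("zzzzz")
    };
    admin.createTable(describe(), splits);
  }
}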
2023-07-15 18:15:08,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36c6e89b8a6dbcb15b716f1027b1d05f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 18:15:08,170 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74., pid=36, masterSystemTime=1689444908146 2023-07-15 18:15:08,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:08,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,172 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:08,172 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908172"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444908172"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444908172"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444908172"}]},"ts":"1689444908172"} 2023-07-15 18:15:08,173 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:08,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:08,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 
2023-07-15 18:15:08,175 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93e28113e0db3bc7de259bc766d8660, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 18:15:08,175 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908175"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444908175"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444908175"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444908175"}]},"ts":"1689444908175"} 2023-07-15 18:15:08,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,176 DEBUG [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/f 2023-07-15 18:15:08,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:08,176 DEBUG [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/f 2023-07-15 18:15:08,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,177 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36c6e89b8a6dbcb15b716f1027b1d05f columnFamilyName f 2023-07-15 18:15:08,178 INFO [StoreOpener-36c6e89b8a6dbcb15b716f1027b1d05f-1] regionserver.HStore(310): Store=36c6e89b8a6dbcb15b716f1027b1d05f/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:08,178 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=23 2023-07-15 18:15:08,181 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=23, state=SUCCESS; OpenRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,39889,1689444902165 in 187 msec 2023-07-15 18:15:08,182 DEBUG [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/f 2023-07-15 18:15:08,182 DEBUG [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/f 2023-07-15 18:15:08,182 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93e28113e0db3bc7de259bc766d8660 columnFamilyName f 2023-07-15 18:15:08,183 INFO [StoreOpener-f93e28113e0db3bc7de259bc766d8660-1] regionserver.HStore(310): Store=f93e28113e0db3bc7de259bc766d8660/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:08,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=27 2023-07-15 18:15:08,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, REOPEN/MOVE in 561 msec 2023-07-15 
18:15:08,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; OpenRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,37155,1689444906062 in 185 msec 2023-07-15 18:15:08,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,187 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, REOPEN/MOVE in 552 msec 2023-07-15 18:15:08,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 36c6e89b8a6dbcb15b716f1027b1d05f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11668232960, jitterRate=0.08668887615203857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:08,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36c6e89b8a6dbcb15b716f1027b1d05f: 2023-07-15 18:15:08,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93e28113e0db3bc7de259bc766d8660; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9801566080, jitterRate=-0.08715802431106567}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:08,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93e28113e0db3bc7de259bc766d8660: 2023-07-15 18:15:08,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f., pid=33, masterSystemTime=1689444908140 2023-07-15 18:15:08,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660., pid=37, masterSystemTime=1689444908146 2023-07-15 18:15:08,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:08,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
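[editor note] The CompactionConfiguration(173) entries repeated above print the effective store-compaction tuning for family "f" (minCompactSize 128 MB, 3-10 files per compaction, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 ms with 0.5 jitter), which look like stock defaults. A hedged sketch of the standard configuration keys that appear to sit behind those numbers, should they ever need overriding in a test configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class CompactionTuning {
  // Mirrors the values printed by CompactionConfiguration in the log above.
  // Property names are the standard HBase keys; override only what is needed.
  static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);       // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                              // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                             // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                       // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);               // off-peak ratio
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);                  // major period: 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);                // major jitter
    return conf;
  }
}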
2023-07-15 18:15:08,196 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:08,196 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444908196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444908196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444908196"}]},"ts":"1689444908196"} 2023-07-15 18:15:08,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:08,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:08,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bf204421b886b079a47a6c35915b7fa0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 18:15:08,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,198 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:08,199 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908198"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444908198"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444908198"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444908198"}]},"ts":"1689444908198"} 2023-07-15 18:15:08,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,201 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=25 2023-07-15 18:15:08,205 DEBUG [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/f 2023-07-15 18:15:08,205 DEBUG [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/f 2023-07-15 18:15:08,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=25, state=SUCCESS; OpenRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,39889,1689444902165 in 213 msec 2023-07-15 18:15:08,206 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bf204421b886b079a47a6c35915b7fa0 columnFamilyName f 2023-07-15 18:15:08,206 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=24 2023-07-15 18:15:08,206 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=24, state=SUCCESS; OpenRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,37155,1689444906062 in 209 msec 2023-07-15 18:15:08,207 INFO [StoreOpener-bf204421b886b079a47a6c35915b7fa0-1] regionserver.HStore(310): Store=bf204421b886b079a47a6c35915b7fa0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:08,210 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, REOPEN/MOVE in 579 msec 2023-07-15 18:15:08,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, REOPEN/MOVE in 582 msec 2023-07-15 18:15:08,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,217 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bf204421b886b079a47a6c35915b7fa0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10837373280, jitterRate=0.009309038519859314}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:08,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bf204421b886b079a47a6c35915b7fa0: 2023-07-15 18:15:08,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0., pid=35, masterSystemTime=1689444908146 2023-07-15 18:15:08,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,221 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,221 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908221"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444908221"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444908221"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444908221"}]},"ts":"1689444908221"} 2023-07-15 18:15:08,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=29 2023-07-15 18:15:08,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=29, state=SUCCESS; OpenRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,37155,1689444906062 in 232 msec 2023-07-15 18:15:08,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, REOPEN/MOVE in 586 msec 2023-07-15 18:15:08,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-15 18:15:08,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1729975212. 
2023-07-15 18:15:08,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:08,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:08,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:08,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:08,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:08,654 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:08,662 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:08,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:08,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:08,681 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444908680"}]},"ts":"1689444908680"} 2023-07-15 18:15:08,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-15 18:15:08,682 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-15 18:15:08,684 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-15 18:15:08,686 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, UNASSIGN}] 2023-07-15 18:15:08,688 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, UNASSIGN 2023-07-15 18:15:08,688 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, UNASSIGN 2023-07-15 18:15:08,688 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, UNASSIGN 2023-07-15 18:15:08,688 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, UNASSIGN 2023-07-15 18:15:08,689 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, UNASSIGN 2023-07-15 18:15:08,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,690 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908689"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444908689"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444908689"}]},"ts":"1689444908689"} 2023-07-15 18:15:08,691 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:08,691 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908691"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444908691"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444908691"}]},"ts":"1689444908691"} 2023-07-15 18:15:08,692 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:08,692 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,692 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444908692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444908692"}]},"ts":"1689444908692"} 2023-07-15 18:15:08,692 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:08,692 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444908692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444908692"}]},"ts":"1689444908692"} 2023-07-15 18:15:08,692 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908692"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444908692"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444908692"}]},"ts":"1689444908692"} 2023-07-15 18:15:08,693 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:08,694 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=39, state=RUNNABLE; CloseRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:08,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=41, state=RUNNABLE; CloseRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:08,696 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:08,698 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:08,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-15 18:15:08,846 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f93e28113e0db3bc7de259bc766d8660, disabling compactions & flushes 2023-07-15 18:15:08,847 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 
2023-07-15 18:15:08,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:08,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. after waiting 0 ms 2023-07-15 18:15:08,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:08,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e7dcb99f03f5499042992813a6c11816, disabling compactions & flushes 2023-07-15 18:15:08,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:08,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:08,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. after waiting 0 ms 2023-07-15 18:15:08,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 2023-07-15 18:15:08,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:08,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:08,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660. 2023-07-15 18:15:08,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93e28113e0db3bc7de259bc766d8660: 2023-07-15 18:15:08,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816. 
2023-07-15 18:15:08,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e7dcb99f03f5499042992813a6c11816: 2023-07-15 18:15:08,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:08,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bf204421b886b079a47a6c35915b7fa0, disabling compactions & flushes 2023-07-15 18:15:08,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,859 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=f93e28113e0db3bc7de259bc766d8660, regionState=CLOSED 2023-07-15 18:15:08,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. after waiting 0 ms 2023-07-15 18:15:08,859 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444908859"}]},"ts":"1689444908859"} 2023-07-15 18:15:08,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:08,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36c6e89b8a6dbcb15b716f1027b1d05f, disabling compactions & flushes 2023-07-15 18:15:08,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:08,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:08,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 
after waiting 0 ms 2023-07-15 18:15:08,861 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=e7dcb99f03f5499042992813a6c11816, regionState=CLOSED 2023-07-15 18:15:08,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:08,861 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908861"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444908861"}]},"ts":"1689444908861"} 2023-07-15 18:15:08,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:08,867 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-15 18:15:08,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure f93e28113e0db3bc7de259bc766d8660, server=jenkins-hbase4.apache.org,37155,1689444906062 in 169 msec 2023-07-15 18:15:08,869 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=39 2023-07-15 18:15:08,869 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=39, state=SUCCESS; CloseRegionProcedure e7dcb99f03f5499042992813a6c11816, server=jenkins-hbase4.apache.org,39889,1689444902165 in 170 msec 2023-07-15 18:15:08,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0. 2023-07-15 18:15:08,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bf204421b886b079a47a6c35915b7fa0: 2023-07-15 18:15:08,871 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f93e28113e0db3bc7de259bc766d8660, UNASSIGN in 181 msec 2023-07-15 18:15:08,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:08,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,874 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 687d0ffc1c561b5c571b9ae6cf917b74, disabling compactions & flushes 2023-07-15 18:15:08,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:08,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 
2023-07-15 18:15:08,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. after waiting 0 ms 2023-07-15 18:15:08,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 2023-07-15 18:15:08,875 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e7dcb99f03f5499042992813a6c11816, UNASSIGN in 183 msec 2023-07-15 18:15:08,875 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=bf204421b886b079a47a6c35915b7fa0, regionState=CLOSED 2023-07-15 18:15:08,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:08,875 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444908875"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444908875"}]},"ts":"1689444908875"} 2023-07-15 18:15:08,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f. 2023-07-15 18:15:08,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36c6e89b8a6dbcb15b716f1027b1d05f: 2023-07-15 18:15:08,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:08,879 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=36c6e89b8a6dbcb15b716f1027b1d05f, regionState=CLOSED 2023-07-15 18:15:08,879 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908879"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444908879"}]},"ts":"1689444908879"} 2023-07-15 18:15:08,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:08,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74. 
2023-07-15 18:15:08,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 687d0ffc1c561b5c571b9ae6cf917b74: 2023-07-15 18:15:08,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-15 18:15:08,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure bf204421b886b079a47a6c35915b7fa0, server=jenkins-hbase4.apache.org,37155,1689444906062 in 179 msec 2023-07-15 18:15:08,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:08,883 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bf204421b886b079a47a6c35915b7fa0, UNASSIGN in 195 msec 2023-07-15 18:15:08,884 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=687d0ffc1c561b5c571b9ae6cf917b74, regionState=CLOSED 2023-07-15 18:15:08,884 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444908884"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444908884"}]},"ts":"1689444908884"} 2023-07-15 18:15:08,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=41 2023-07-15 18:15:08,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; CloseRegionProcedure 36c6e89b8a6dbcb15b716f1027b1d05f, server=jenkins-hbase4.apache.org,39889,1689444902165 in 187 msec 2023-07-15 18:15:08,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=36c6e89b8a6dbcb15b716f1027b1d05f, UNASSIGN in 201 msec 2023-07-15 18:15:08,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-15 18:15:08,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 687d0ffc1c561b5c571b9ae6cf917b74, server=jenkins-hbase4.apache.org,37155,1689444906062 in 192 msec 2023-07-15 18:15:08,896 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=38 2023-07-15 18:15:08,896 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=687d0ffc1c561b5c571b9ae6cf917b74, UNASSIGN in 206 msec 2023-07-15 18:15:08,897 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444908897"}]},"ts":"1689444908897"} 2023-07-15 18:15:08,899 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-15 18:15:08,901 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-15 18:15:08,904 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; 
DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 232 msec 2023-07-15 18:15:08,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-15 18:15:08,986 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-15 18:15:08,987 INFO [Listener at localhost/40085] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:08,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:09,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-15 18:15:09,003 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-15 18:15:09,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-15 18:15:09,017 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:09,017 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:09,017 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:09,017 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:09,017 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:09,022 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits] 2023-07-15 18:15:09,022 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/f, FileablePath, 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits] 2023-07-15 18:15:09,022 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits] 2023-07-15 18:15:09,022 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits] 2023-07-15 18:15:09,023 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits] 2023-07-15 18:15:09,037 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0/recovered.edits/7.seqid 2023-07-15 18:15:09,037 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660/recovered.edits/7.seqid 2023-07-15 18:15:09,037 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f/recovered.edits/7.seqid 2023-07-15 18:15:09,038 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits/7.seqid to 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74/recovered.edits/7.seqid 2023-07-15 18:15:09,038 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bf204421b886b079a47a6c35915b7fa0 2023-07-15 18:15:09,038 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f93e28113e0db3bc7de259bc766d8660 2023-07-15 18:15:09,039 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/36c6e89b8a6dbcb15b716f1027b1d05f 2023-07-15 18:15:09,039 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/687d0ffc1c561b5c571b9ae6cf917b74 2023-07-15 18:15:09,044 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816/recovered.edits/7.seqid 2023-07-15 18:15:09,045 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e7dcb99f03f5499042992813a6c11816 2023-07-15 18:15:09,045 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 18:15:09,075 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-15 18:15:09,080 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-15 18:15:09,081 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-15 18:15:09,081 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444909081"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,081 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444909081"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,081 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444909081"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,081 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444909081"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,081 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444909081"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,085 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 18:15:09,085 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e7dcb99f03f5499042992813a6c11816, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444906383.e7dcb99f03f5499042992813a6c11816.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => f93e28113e0db3bc7de259bc766d8660, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444906383.f93e28113e0db3bc7de259bc766d8660.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 36c6e89b8a6dbcb15b716f1027b1d05f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444906383.36c6e89b8a6dbcb15b716f1027b1d05f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 687d0ffc1c561b5c571b9ae6cf917b74, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444906383.687d0ffc1c561b5c571b9ae6cf917b74.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => bf204421b886b079a47a6c35915b7fa0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444906383.bf204421b886b079a47a6c35915b7fa0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 18:15:09,085 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-15 18:15:09,085 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444909085"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:09,088 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-15 18:15:09,096 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,096 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,096 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,096 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,096 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,097 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 empty. 2023-07-15 18:15:09,097 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d empty. 2023-07-15 18:15:09,097 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 empty. 2023-07-15 18:15:09,097 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 empty. 2023-07-15 18:15:09,097 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 empty. 
2023-07-15 18:15:09,098 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,098 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,098 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,099 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,099 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,099 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 18:15:09,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-15 18:15:09,135 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:09,140 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 95537fa0239396a5de5e4f7591424423, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:09,149 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e21f997c85f702b543563628ae120429, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:09,151 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f565d2b104d27f7021fc84238f9602f3, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:09,233 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,233 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 95537fa0239396a5de5e4f7591424423, disabling compactions & flushes 2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e21f997c85f702b543563628ae120429, disabling compactions & flushes 2023-07-15 18:15:09,237 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. after waiting 0 ms 2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:09,237 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 
2023-07-15 18:15:09,237 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e21f997c85f702b543563628ae120429: 2023-07-15 18:15:09,238 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 50c3d0178fc30e5cd126f6214b60904d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:09,237 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:09,238 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:09,238 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. after waiting 0 ms 2023-07-15 18:15:09,238 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:09,238 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 
2023-07-15 18:15:09,238 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 95537fa0239396a5de5e4f7591424423: 2023-07-15 18:15:09,239 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7968fec623aceadbc5d5507df7291db6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f565d2b104d27f7021fc84238f9602f3, disabling compactions & flushes 2023-07-15 18:15:09,255 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. after waiting 0 ms 2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,255 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 
2023-07-15 18:15:09,255 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f565d2b104d27f7021fc84238f9602f3: 2023-07-15 18:15:09,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 50c3d0178fc30e5cd126f6214b60904d, disabling compactions & flushes 2023-07-15 18:15:09,319 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. after waiting 0 ms 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,319 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,319 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 50c3d0178fc30e5cd126f6214b60904d: 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7968fec623aceadbc5d5507df7291db6, disabling compactions & flushes 2023-07-15 18:15:09,320 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 
after waiting 0 ms 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,320 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,320 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7968fec623aceadbc5d5507df7291db6: 2023-07-15 18:15:09,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444909325"}]},"ts":"1689444909325"} 2023-07-15 18:15:09,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444909325"}]},"ts":"1689444909325"} 2023-07-15 18:15:09,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444909325"}]},"ts":"1689444909325"} 2023-07-15 18:15:09,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444909325"}]},"ts":"1689444909325"} 2023-07-15 18:15:09,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909325"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444909325"}]},"ts":"1689444909325"} 2023-07-15 18:15:09,329 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-15 18:15:09,331 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444909331"}]},"ts":"1689444909331"} 2023-07-15 18:15:09,332 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-15 18:15:09,337 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:09,337 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:09,337 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:09,337 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:09,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, ASSIGN}] 2023-07-15 18:15:09,341 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, ASSIGN 2023-07-15 18:15:09,342 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, ASSIGN 2023-07-15 18:15:09,342 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, ASSIGN 2023-07-15 18:15:09,343 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, ASSIGN 2023-07-15 18:15:09,343 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, ASSIGN 2023-07-15 18:15:09,343 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:09,344 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:09,344 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:09,344 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:09,344 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:09,494 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-15 18:15:09,497 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=f565d2b104d27f7021fc84238f9602f3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,497 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=95537fa0239396a5de5e4f7591424423, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:09,497 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e21f997c85f702b543563628ae120429, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,497 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444909497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444909497"}]},"ts":"1689444909497"} 2023-07-15 18:15:09,497 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444909497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444909497"}]},"ts":"1689444909497"} 2023-07-15 18:15:09,497 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=50c3d0178fc30e5cd126f6214b60904d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:09,497 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7968fec623aceadbc5d5507df7291db6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,497 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444909497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444909497"}]},"ts":"1689444909497"} 2023-07-15 18:15:09,498 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444909497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444909497"}]},"ts":"1689444909497"} 2023-07-15 18:15:09,498 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909497"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444909497"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444909497"}]},"ts":"1689444909497"} 2023-07-15 18:15:09,500 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=50, state=RUNNABLE; OpenRegionProcedure 
e21f997c85f702b543563628ae120429, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:09,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=52, state=RUNNABLE; OpenRegionProcedure f565d2b104d27f7021fc84238f9602f3, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:09,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=51, state=RUNNABLE; OpenRegionProcedure 95537fa0239396a5de5e4f7591424423, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:09,504 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; OpenRegionProcedure 50c3d0178fc30e5cd126f6214b60904d, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:09,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure 7968fec623aceadbc5d5507df7291db6, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:09,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-15 18:15:09,658 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7968fec623aceadbc5d5507df7291db6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 18:15:09,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,658 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,660 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 
2023-07-15 18:15:09,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 50c3d0178fc30e5cd126f6214b60904d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 18:15:09,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,660 INFO [StoreOpener-7968fec623aceadbc5d5507df7291db6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,662 INFO [StoreOpener-50c3d0178fc30e5cd126f6214b60904d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,663 DEBUG [StoreOpener-7968fec623aceadbc5d5507df7291db6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/f 2023-07-15 18:15:09,663 DEBUG [StoreOpener-7968fec623aceadbc5d5507df7291db6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/f 2023-07-15 18:15:09,664 INFO [StoreOpener-7968fec623aceadbc5d5507df7291db6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7968fec623aceadbc5d5507df7291db6 columnFamilyName f 2023-07-15 18:15:09,665 DEBUG [StoreOpener-50c3d0178fc30e5cd126f6214b60904d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/f 2023-07-15 18:15:09,665 DEBUG [StoreOpener-50c3d0178fc30e5cd126f6214b60904d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/f 2023-07-15 18:15:09,665 INFO [StoreOpener-7968fec623aceadbc5d5507df7291db6-1] regionserver.HStore(310): Store=7968fec623aceadbc5d5507df7291db6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:09,665 INFO [StoreOpener-50c3d0178fc30e5cd126f6214b60904d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 50c3d0178fc30e5cd126f6214b60904d columnFamilyName f 2023-07-15 18:15:09,666 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,666 INFO [StoreOpener-50c3d0178fc30e5cd126f6214b60904d-1] regionserver.HStore(310): Store=50c3d0178fc30e5cd126f6214b60904d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:09,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:09,673 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:09,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:09,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7968fec623aceadbc5d5507df7291db6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9913008800, jitterRate=-0.07677911221981049}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:09,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7968fec623aceadbc5d5507df7291db6: 2023-07-15 18:15:09,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6., pid=59, masterSystemTime=1689444909652 2023-07-15 18:15:09,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:09,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f565d2b104d27f7021fc84238f9602f3, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 18:15:09,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,682 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=7968fec623aceadbc5d5507df7291db6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,682 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909682"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444909682"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444909682"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444909682"}]},"ts":"1689444909682"} 2023-07-15 18:15:09,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-15 18:15:09,688 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure 7968fec623aceadbc5d5507df7291db6, server=jenkins-hbase4.apache.org,39889,1689444902165 in 180 msec 2023-07-15 18:15:09,690 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, ASSIGN in 348 msec 2023-07-15 18:15:09,691 INFO [StoreOpener-f565d2b104d27f7021fc84238f9602f3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:09,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 50c3d0178fc30e5cd126f6214b60904d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11687266240, jitterRate=0.0884614884853363}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:09,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 50c3d0178fc30e5cd126f6214b60904d: 2023-07-15 18:15:09,694 DEBUG [StoreOpener-f565d2b104d27f7021fc84238f9602f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/f 2023-07-15 18:15:09,694 DEBUG [StoreOpener-f565d2b104d27f7021fc84238f9602f3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/f 2023-07-15 18:15:09,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d., pid=58, masterSystemTime=1689444909655 2023-07-15 18:15:09,694 INFO [StoreOpener-f565d2b104d27f7021fc84238f9602f3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f565d2b104d27f7021fc84238f9602f3 columnFamilyName f 2023-07-15 18:15:09,695 INFO [StoreOpener-f565d2b104d27f7021fc84238f9602f3-1] regionserver.HStore(310): Store=f565d2b104d27f7021fc84238f9602f3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:09,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:09,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:09,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 95537fa0239396a5de5e4f7591424423, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 18:15:09,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,697 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=50c3d0178fc30e5cd126f6214b60904d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:09,698 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909697"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444909697"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444909697"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444909697"}]},"ts":"1689444909697"} 2023-07-15 18:15:09,700 INFO [StoreOpener-95537fa0239396a5de5e4f7591424423-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:09,703 DEBUG [StoreOpener-95537fa0239396a5de5e4f7591424423-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/f 2023-07-15 18:15:09,704 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-15 18:15:09,704 DEBUG [StoreOpener-95537fa0239396a5de5e4f7591424423-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/f 2023-07-15 18:15:09,704 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; OpenRegionProcedure 50c3d0178fc30e5cd126f6214b60904d, server=jenkins-hbase4.apache.org,37155,1689444906062 in 196 msec 2023-07-15 18:15:09,705 INFO [StoreOpener-95537fa0239396a5de5e4f7591424423-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 95537fa0239396a5de5e4f7591424423 columnFamilyName f 2023-07-15 18:15:09,706 INFO [StoreOpener-95537fa0239396a5de5e4f7591424423-1] regionserver.HStore(310): Store=95537fa0239396a5de5e4f7591424423/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:09,706 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, ASSIGN in 364 msec 2023-07-15 18:15:09,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:09,707 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f565d2b104d27f7021fc84238f9602f3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10936772160, jitterRate=0.018566280603408813}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:09,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f565d2b104d27f7021fc84238f9602f3: 2023-07-15 18:15:09,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,708 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3., pid=56, masterSystemTime=1689444909652 2023-07-15 18:15:09,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,710 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:09,710 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 
2023-07-15 18:15:09,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e21f997c85f702b543563628ae120429, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 18:15:09,712 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=f565d2b104d27f7021fc84238f9602f3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,712 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909712"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444909712"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444909712"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444909712"}]},"ts":"1689444909712"} 2023-07-15 18:15:09,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:09,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:09,715 INFO [StoreOpener-e21f997c85f702b543563628ae120429-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,717 DEBUG [StoreOpener-e21f997c85f702b543563628ae120429-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/f 2023-07-15 18:15:09,717 DEBUG [StoreOpener-e21f997c85f702b543563628ae120429-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/f 2023-07-15 18:15:09,718 INFO [StoreOpener-e21f997c85f702b543563628ae120429-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e21f997c85f702b543563628ae120429 columnFamilyName f 2023-07-15 18:15:09,718 INFO [StoreOpener-e21f997c85f702b543563628ae120429-1] regionserver.HStore(310): Store=e21f997c85f702b543563628ae120429/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:09,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-15 18:15:09,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; OpenRegionProcedure f565d2b104d27f7021fc84238f9602f3, server=jenkins-hbase4.apache.org,39889,1689444902165 in 215 msec 2023-07-15 18:15:09,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:09,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 95537fa0239396a5de5e4f7591424423; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9465287520, jitterRate=-0.11847640573978424}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:09,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 95537fa0239396a5de5e4f7591424423: 2023-07-15 18:15:09,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,722 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423., pid=57, masterSystemTime=1689444909655 2023-07-15 18:15:09,722 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, ASSIGN in 379 msec 2023-07-15 18:15:09,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:09,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 
2023-07-15 18:15:09,724 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=95537fa0239396a5de5e4f7591424423, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:09,724 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444909724"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444909724"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444909724"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444909724"}]},"ts":"1689444909724"} 2023-07-15 18:15:09,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e21f997c85f702b543563628ae120429 2023-07-15 18:15:09,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:09,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e21f997c85f702b543563628ae120429; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10553860800, jitterRate=-0.017095118761062622}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:09,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=51 2023-07-15 18:15:09,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e21f997c85f702b543563628ae120429: 2023-07-15 18:15:09,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=51, state=SUCCESS; OpenRegionProcedure 95537fa0239396a5de5e4f7591424423, server=jenkins-hbase4.apache.org,37155,1689444906062 in 224 msec 2023-07-15 18:15:09,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429., pid=55, masterSystemTime=1689444909652 2023-07-15 18:15:09,731 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, ASSIGN in 391 msec 2023-07-15 18:15:09,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:09,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 
2023-07-15 18:15:09,734 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e21f997c85f702b543563628ae120429, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:09,734 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444909734"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444909734"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444909734"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444909734"}]},"ts":"1689444909734"} 2023-07-15 18:15:09,740 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=50 2023-07-15 18:15:09,741 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=50, state=SUCCESS; OpenRegionProcedure e21f997c85f702b543563628ae120429, server=jenkins-hbase4.apache.org,39889,1689444902165 in 236 msec 2023-07-15 18:15:09,744 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=49 2023-07-15 18:15:09,744 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, ASSIGN in 404 msec 2023-07-15 18:15:09,744 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444909744"}]},"ts":"1689444909744"} 2023-07-15 18:15:09,746 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-15 18:15:09,748 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-15 18:15:09,752 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 754 msec 2023-07-15 18:15:10,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-15 18:15:10,118 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-15 18:15:10,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,122 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-15 18:15:10,128 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444910128"}]},"ts":"1689444910128"} 2023-07-15 18:15:10,130 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-15 18:15:10,132 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-15 18:15:10,133 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, UNASSIGN}] 2023-07-15 18:15:10,135 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, UNASSIGN 2023-07-15 18:15:10,135 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, UNASSIGN 2023-07-15 18:15:10,135 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, UNASSIGN 2023-07-15 18:15:10,135 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, UNASSIGN 2023-07-15 
18:15:10,136 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, UNASSIGN 2023-07-15 18:15:10,136 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=95537fa0239396a5de5e4f7591424423, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:10,137 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910136"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910136"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910136"}]},"ts":"1689444910136"} 2023-07-15 18:15:10,137 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=50c3d0178fc30e5cd126f6214b60904d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:10,137 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910137"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910137"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910137"}]},"ts":"1689444910137"} 2023-07-15 18:15:10,138 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=e21f997c85f702b543563628ae120429, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:10,138 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444910138"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910138"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910138"}]},"ts":"1689444910138"} 2023-07-15 18:15:10,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=62, state=RUNNABLE; CloseRegionProcedure 95537fa0239396a5de5e4f7591424423, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:10,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=64, state=RUNNABLE; CloseRegionProcedure 50c3d0178fc30e5cd126f6214b60904d, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:10,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=61, state=RUNNABLE; CloseRegionProcedure e21f997c85f702b543563628ae120429, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:10,142 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=f565d2b104d27f7021fc84238f9602f3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:10,142 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910142"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910142"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910142"}]},"ts":"1689444910142"} 2023-07-15 18:15:10,144 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7968fec623aceadbc5d5507df7291db6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:10,144 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444910144"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910144"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910144"}]},"ts":"1689444910144"} 2023-07-15 18:15:10,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=63, state=RUNNABLE; CloseRegionProcedure f565d2b104d27f7021fc84238f9602f3, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:10,148 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure 7968fec623aceadbc5d5507df7291db6, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:10,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-15 18:15:10,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:10,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 95537fa0239396a5de5e4f7591424423, disabling compactions & flushes 2023-07-15 18:15:10,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:10,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:10,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. after waiting 0 ms 2023-07-15 18:15:10,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:10,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:10,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7968fec623aceadbc5d5507df7291db6, disabling compactions & flushes 2023-07-15 18:15:10,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 
2023-07-15 18:15:10,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:10,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. after waiting 0 ms 2023-07-15 18:15:10,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:10,306 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:10,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423. 2023-07-15 18:15:10,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 95537fa0239396a5de5e4f7591424423: 2023-07-15 18:15:10,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:10,309 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6. 2023-07-15 18:15:10,310 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7968fec623aceadbc5d5507df7291db6: 2023-07-15 18:15:10,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:10,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:10,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 50c3d0178fc30e5cd126f6214b60904d, disabling compactions & flushes 2023-07-15 18:15:10,311 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:10,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 2023-07-15 18:15:10,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. after waiting 0 ms 2023-07-15 18:15:10,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 
2023-07-15 18:15:10,312 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=95537fa0239396a5de5e4f7591424423, regionState=CLOSED 2023-07-15 18:15:10,312 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910312"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444910312"}]},"ts":"1689444910312"} 2023-07-15 18:15:10,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:10,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:10,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f565d2b104d27f7021fc84238f9602f3, disabling compactions & flushes 2023-07-15 18:15:10,313 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:10,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:10,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. after waiting 0 ms 2023-07-15 18:15:10,313 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:10,315 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=7968fec623aceadbc5d5507df7291db6, regionState=CLOSED 2023-07-15 18:15:10,315 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444910315"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444910315"}]},"ts":"1689444910315"} 2023-07-15 18:15:10,320 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:10,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d. 
2023-07-15 18:15:10,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 50c3d0178fc30e5cd126f6214b60904d: 2023-07-15 18:15:10,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=62 2023-07-15 18:15:10,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:10,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=62, state=SUCCESS; CloseRegionProcedure 95537fa0239396a5de5e4f7591424423, server=jenkins-hbase4.apache.org,37155,1689444906062 in 179 msec 2023-07-15 18:15:10,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3. 2023-07-15 18:15:10,326 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:10,326 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f565d2b104d27f7021fc84238f9602f3: 2023-07-15 18:15:10,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-15 18:15:10,327 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=50c3d0178fc30e5cd126f6214b60904d, regionState=CLOSED 2023-07-15 18:15:10,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure 7968fec623aceadbc5d5507df7291db6, server=jenkins-hbase4.apache.org,39889,1689444902165 in 173 msec 2023-07-15 18:15:10,327 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910327"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444910327"}]},"ts":"1689444910327"} 2023-07-15 18:15:10,327 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95537fa0239396a5de5e4f7591424423, UNASSIGN in 190 msec 2023-07-15 18:15:10,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:10,329 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e21f997c85f702b543563628ae120429 2023-07-15 18:15:10,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e21f997c85f702b543563628ae120429, disabling compactions & flushes 2023-07-15 18:15:10,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:10,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 
2023-07-15 18:15:10,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. after waiting 0 ms 2023-07-15 18:15:10,330 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 2023-07-15 18:15:10,331 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=f565d2b104d27f7021fc84238f9602f3, regionState=CLOSED 2023-07-15 18:15:10,331 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689444910331"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444910331"}]},"ts":"1689444910331"} 2023-07-15 18:15:10,331 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7968fec623aceadbc5d5507df7291db6, UNASSIGN in 194 msec 2023-07-15 18:15:10,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=64 2023-07-15 18:15:10,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=64, state=SUCCESS; CloseRegionProcedure 50c3d0178fc30e5cd126f6214b60904d, server=jenkins-hbase4.apache.org,37155,1689444906062 in 191 msec 2023-07-15 18:15:10,337 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50c3d0178fc30e5cd126f6214b60904d, UNASSIGN in 201 msec 2023-07-15 18:15:10,337 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=63 2023-07-15 18:15:10,337 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=63, state=SUCCESS; CloseRegionProcedure f565d2b104d27f7021fc84238f9602f3, server=jenkins-hbase4.apache.org,39889,1689444902165 in 190 msec 2023-07-15 18:15:10,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f565d2b104d27f7021fc84238f9602f3, UNASSIGN in 204 msec 2023-07-15 18:15:10,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:10,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429. 
2023-07-15 18:15:10,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e21f997c85f702b543563628ae120429: 2023-07-15 18:15:10,352 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 18:15:10,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e21f997c85f702b543563628ae120429 2023-07-15 18:15:10,355 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=e21f997c85f702b543563628ae120429, regionState=CLOSED 2023-07-15 18:15:10,355 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689444910355"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444910355"}]},"ts":"1689444910355"} 2023-07-15 18:15:10,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=61 2023-07-15 18:15:10,363 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=61, state=SUCCESS; CloseRegionProcedure e21f997c85f702b543563628ae120429, server=jenkins-hbase4.apache.org,39889,1689444902165 in 216 msec 2023-07-15 18:15:10,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-15 18:15:10,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e21f997c85f702b543563628ae120429, UNASSIGN in 231 msec 2023-07-15 18:15:10,367 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444910367"}]},"ts":"1689444910367"} 2023-07-15 18:15:10,370 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-15 18:15:10,372 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-15 18:15:10,378 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 250 msec 2023-07-15 18:15:10,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-15 18:15:10,431 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-15 18:15:10,436 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-15 18:15:10,438 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 18:15:10,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,439 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-15 18:15:10,441 DEBUG 
[HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 18:15:10,441 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-15 18:15:10,442 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:10,442 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-15 18:15:10,442 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 18:15:10,442 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-15 18:15:10,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,452 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1729975212' 2023-07-15 18:15:10,453 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:10,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-15 18:15:10,472 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:10,472 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:10,472 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:10,472 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:10,472 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:10,477 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/recovered.edits] 2023-07-15 18:15:10,478 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/recovered.edits] 2023-07-15 18:15:10,478 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/recovered.edits] 2023-07-15 18:15:10,478 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/recovered.edits] 2023-07-15 18:15:10,481 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/recovered.edits] 2023-07-15 18:15:10,493 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429/recovered.edits/4.seqid 2023-07-15 18:15:10,494 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3/recovered.edits/4.seqid 2023-07-15 18:15:10,495 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e21f997c85f702b543563628ae120429 2023-07-15 18:15:10,495 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423/recovered.edits/4.seqid 2023-07-15 18:15:10,495 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6/recovered.edits/4.seqid 2023-07-15 18:15:10,495 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d/recovered.edits/4.seqid 2023-07-15 18:15:10,495 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f565d2b104d27f7021fc84238f9602f3 2023-07-15 18:15:10,496 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95537fa0239396a5de5e4f7591424423 2023-07-15 18:15:10,497 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50c3d0178fc30e5cd126f6214b60904d 2023-07-15 18:15:10,497 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7968fec623aceadbc5d5507df7291db6 2023-07-15 18:15:10,497 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-15 18:15:10,500 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,507 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-15 18:15:10,510 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-15 18:15:10,512 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,512 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-15 18:15:10,512 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444910512"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,512 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444910512"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,512 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444910512"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,513 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444910512"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,513 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444910512"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,515 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 18:15:10,515 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e21f997c85f702b543563628ae120429, NAME => 'Group_testTableMoveTruncateAndDrop,,1689444909047.e21f997c85f702b543563628ae120429.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 95537fa0239396a5de5e4f7591424423, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689444909047.95537fa0239396a5de5e4f7591424423.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f565d2b104d27f7021fc84238f9602f3, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689444909047.f565d2b104d27f7021fc84238f9602f3.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 
'r\x1C\xC7r\x1B'}, {ENCODED => 50c3d0178fc30e5cd126f6214b60904d, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689444909047.50c3d0178fc30e5cd126f6214b60904d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 7968fec623aceadbc5d5507df7291db6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689444909047.7968fec623aceadbc5d5507df7291db6.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 18:15:10,516 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-15 18:15:10,516 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444910516"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:10,518 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-15 18:15:10,521 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-15 18:15:10,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 81 msec 2023-07-15 18:15:10,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-15 18:15:10,574 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-15 18:15:10,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:10,590 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup default 2023-07-15 18:15:10,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:10,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1729975212, current retry=0 2023-07-15 18:15:10,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1729975212 => default 2023-07-15 18:15:10,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1729975212 2023-07-15 18:15:10,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:10,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:10,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:10,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:10,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:10,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,635 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:10,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:10,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:10,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:10,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446110651, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:10,653 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:10,655 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:10,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,657 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:10,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:10,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,694 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=490 (was 418) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37155Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-148758507_17 at /127.0.0.1:40728 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54099@0x7095516b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-283860936_17 at /127.0.0.1:40464 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955-prefix:jenkins-hbase4.apache.org,37155,1689444906062 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-632 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:37155-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-635 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-631-acceptor-0@5d9296e9-ServerConnector@57fbd536{HTTP/1.1, (http/1.1)}{0.0.0.0:43863} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-630 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54099@0x7095516b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-148758507_17 at /127.0.0.1:40770 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44585 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5009bbb6-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54099@0x7095516b-SendThread(127.0.0.1:54099) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-148758507_17 at /127.0.0.1:34236 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:44585 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:3;jenkins-hbase4:37155 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-148758507_17 at /127.0.0.1:40402 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp489259809-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp489259809-633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=768 (was 691) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=469 (was 422) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3747 (was 4087) 2023-07-15 18:15:10,713 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=490, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=469, ProcessCount=172, AvailableMemoryMB=3744 2023-07-15 18:15:10,713 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-15 18:15:10,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
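The cleanup traffic logged here (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup) is the per-test restore that TestRSGroupsBase drives through the RSGroupAdmin client, and the ConstraintException in this log is tolerated by the test because the address it tries to move (the master's RPC endpoint on port 41169) is not a live region server. A minimal sketch of the restore-to-default portion of that sequence, assuming the RSGroupAdmin interface from the hbase-rsgroup module; this is an illustrative helper, not the test's actual teardown code:

import java.io.IOException;
import java.util.Set;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Illustrative sketch only: move every non-default group's tables and servers
// back to the "default" group, then drop the emptied group, mirroring the
// MoveTables / MoveServers / RemoveRSGroup calls visible in the log above.
final class RSGroupCleanupSketch {
  static void restoreDefaults(RSGroupAdmin admin) throws IOException {
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // nothing to move out of the default group itself
      }
      Set<TableName> tables = group.getTables();
      if (!tables.isEmpty()) {
        admin.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);
      }
      Set<Address> servers = group.getServers();
      try {
        if (!servers.isEmpty()) {
          admin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
      } catch (ConstraintException e) {
        // As in the log: a server address that is offline, or that belongs to
        // the master rather than a region server, is rejected by RSGroupAdminServer.
      }
      admin.removeRSGroup(group.getName()); // group is empty at this point
    }
  }
}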
2023-07-15 18:15:10,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:10,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:10,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:10,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,736 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:10,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:10,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:10,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:10,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446110762, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:10,764 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:10,766 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:10,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,768 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:10,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:10,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-15 18:15:10,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42212 deadline: 1689446110769, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 18:15:10,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-15 18:15:10,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42212 deadline: 1689446110771, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 18:15:10,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-15 18:15:10,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:42212 deadline: 1689446110773, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-15 18:15:10,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-15 18:15:10,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-15 18:15:10,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:10,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:10,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:10,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-15 18:15:10,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:10,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:10,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:10,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:10,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:10,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,834 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:10,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:10,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:10,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:10,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446110856, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:10,857 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:10,859 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:10,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,861 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:10,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:10,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,886 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=492 (was 490) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=768 (was 768), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=469 (was 469), ProcessCount=172 (was 172), AvailableMemoryMB=3739 (was 3744) 2023-07-15 18:15:10,906 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=492, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=469, ProcessCount=172, AvailableMemoryMB=3738 2023-07-15 18:15:10,907 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-15 18:15:10,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:10,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:10,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:10,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:10,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:10,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:10,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:10,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:10,932 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:10,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:10,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:10,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:10,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:10,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446110947, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:10,948 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-15 18:15:10,950 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:10,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,951 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:10,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:10,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:10,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:10,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-15 18:15:10,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:10,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:10,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:10,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:10,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:10,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191] to rsgroup bar 2023-07-15 18:15:10,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:10,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:10,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:10,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:10,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(238): Moving server region 1c87ff5cd30bfdf1c603a34ec3bb14c0, which do not belong to RSGroup bar 2023-07-15 18:15:10,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, REOPEN/MOVE 2023-07-15 18:15:10,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-15 18:15:10,979 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, REOPEN/MOVE 2023-07-15 18:15:10,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 18:15:10,980 INFO [PEWorker-2] assignment.RegionStateStore(219): 
pid=72 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:10,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-15 18:15:10,981 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-15 18:15:10,982 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40191,1689444902237, state=CLOSING 2023-07-15 18:15:10,983 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444910980"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444910980"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444910980"}]},"ts":"1689444910980"} 2023-07-15 18:15:10,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; CloseRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:10,985 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:10,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=73, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:10,985 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:10,988 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=74, ppid=72, state=RUNNABLE; CloseRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:11,142 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-15 18:15:11,143 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:11,143 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:11,143 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:11,143 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:11,143 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:11,144 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.45 KB heapSize=62.50 KB 2023-07-15 18:15:11,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed 
memstore data size=37.39 KB at sequenceid=91 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/info/898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,268 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,305 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=91 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/rep_barrier/dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,313 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,332 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=91 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/table/b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,340 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,342 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/info/898baf035bc44537a63a811ea84899de as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info/898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info/898baf035bc44537a63a811ea84899de, entries=41, sequenceid=91, filesize=9.6 K 2023-07-15 18:15:11,352 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/rep_barrier/dbb2ab96f86e45d1bb030fc7087e7ef3 as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier/dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,360 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,361 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier/dbb2ab96f86e45d1bb030fc7087e7ef3, entries=10, sequenceid=91, filesize=6.1 K 2023-07-15 18:15:11,362 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/table/b15a40fac619469ea47e27ea0938b3b3 as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table/b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table/b15a40fac619469ea47e27ea0938b3b3, entries=15, sequenceid=91, filesize=6.2 K 2023-07-15 18:15:11,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.45 KB/41425, heapSize ~62.45 KB/63952, currentSize=0 B/0 for 1588230740 in 227ms, sequenceid=91, compaction requested=false 2023-07-15 18:15:11,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/recovered.edits/94.seqid, newMaxSeqId=94, maxSeqId=1 2023-07-15 18:15:11,384 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:11,385 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:11,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:11,385 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44901,1689444902054 record at close sequenceid=91 2023-07-15 18:15:11,392 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-15 18:15:11,392 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-15 18:15:11,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=73 2023-07-15 18:15:11,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=73, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40191,1689444902237 in 407 msec 2023-07-15 18:15:11,395 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:11,545 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44901,1689444902054, state=OPENING 2023-07-15 18:15:11,548 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:11,548 DEBUG 
[zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:11,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:11,707 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 18:15:11,707 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:11,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44901%2C1689444902054.meta, suffix=.meta, logDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,44901,1689444902054, archiveDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs, maxLogs=32 2023-07-15 18:15:11,731 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK] 2023-07-15 18:15:11,732 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK] 2023-07-15 18:15:11,738 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK] 2023-07-15 18:15:11,742 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/WALs/jenkins-hbase4.apache.org,44901,1689444902054/jenkins-hbase4.apache.org%2C44901%2C1689444902054.meta.1689444911711.meta 2023-07-15 18:15:11,742 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46049,DS-81d85367-4607-466b-a028-36462b1964fb,DISK], DatanodeInfoWithStorage[127.0.0.1:40573,DS-00566f2b-0518-49e6-9ca8-6db1edc7b717,DISK], DatanodeInfoWithStorage[127.0.0.1:37573,DS-dd97f431-6bfa-4c49-8c1b-aa1d26f1af62,DISK]] 2023-07-15 18:15:11,742 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 18:15:11,743 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor 
org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 18:15:11,743 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 18:15:11,745 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:11,746 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info 2023-07-15 18:15:11,746 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info 2023-07-15 18:15:11,747 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:11,758 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,758 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info/898baf035bc44537a63a811ea84899de 2023-07-15 18:15:11,759 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:11,759 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:11,761 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:11,761 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:11,761 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:11,773 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,773 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier/dbb2ab96f86e45d1bb030fc7087e7ef3 2023-07-15 18:15:11,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:11,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:11,775 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table 2023-07-15 18:15:11,775 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table 2023-07-15 18:15:11,776 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:11,784 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,785 DEBUG 
[StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table/b15a40fac619469ea47e27ea0938b3b3 2023-07-15 18:15:11,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:11,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:11,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740 2023-07-15 18:15:11,790 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 18:15:11,792 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:11,793 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=95; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10479877120, jitterRate=-0.02398538589477539}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:11,793 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:11,799 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=76, masterSystemTime=1689444911701 2023-07-15 18:15:11,801 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 18:15:11,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 18:15:11,804 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44901,1689444902054, state=OPEN 2023-07-15 18:15:11,805 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:11,806 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:11,809 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-15 18:15:11,809 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44901,1689444902054 in 257 msec 2023-07-15 18:15:11,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, 
region=1588230740, REOPEN/MOVE in 831 msec 2023-07-15 18:15:11,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:11,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c87ff5cd30bfdf1c603a34ec3bb14c0, disabling compactions & flushes 2023-07-15 18:15:11,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:11,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:11,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. after waiting 0 ms 2023-07-15 18:15:11,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:11,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1c87ff5cd30bfdf1c603a34ec3bb14c0 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-15 18:15:11,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-15 18:15:11,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/.tmp/info/9dfd98667cab42b5aecfa8fc84a1d8f5 2023-07-15 18:15:12,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/.tmp/info/9dfd98667cab42b5aecfa8fc84a1d8f5 as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info/9dfd98667cab42b5aecfa8fc84a1d8f5 2023-07-15 18:15:12,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info/9dfd98667cab42b5aecfa8fc84a1d8f5, entries=2, sequenceid=6, filesize=4.8 K 2023-07-15 18:15:12,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 1c87ff5cd30bfdf1c603a34ec3bb14c0 in 53ms, sequenceid=6, compaction requested=false 2023-07-15 18:15:12,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-15 18:15:12,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
2023-07-15 18:15:12,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c87ff5cd30bfdf1c603a34ec3bb14c0: 2023-07-15 18:15:12,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1c87ff5cd30bfdf1c603a34ec3bb14c0 move to jenkins-hbase4.apache.org,44901,1689444902054 record at close sequenceid=6 2023-07-15 18:15:12,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,025 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=CLOSED 2023-07-15 18:15:12,025 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444912025"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444912025"}]},"ts":"1689444912025"} 2023-07-15 18:15:12,026 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40191] ipc.CallRunner(144): callId: 178 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:59412 deadline: 1689444972026, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44901 startCode=1689444902054. As of locationSeqNum=91. 2023-07-15 18:15:12,133 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-15 18:15:12,133 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; CloseRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,40191,1689444902237 in 1.1450 sec 2023-07-15 18:15:12,133 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:12,284 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:12,284 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444912284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444912284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444912284"}]},"ts":"1689444912284"} 2023-07-15 18:15:12,291 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=72, state=RUNNABLE; OpenRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:12,448 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
2023-07-15 18:15:12,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1c87ff5cd30bfdf1c603a34ec3bb14c0, NAME => 'hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:12,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:12,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,449 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,451 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,452 DEBUG [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info 2023-07-15 18:15:12,452 DEBUG [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info 2023-07-15 18:15:12,453 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1c87ff5cd30bfdf1c603a34ec3bb14c0 columnFamilyName info 2023-07-15 18:15:12,463 DEBUG [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] regionserver.HStore(539): loaded hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/info/9dfd98667cab42b5aecfa8fc84a1d8f5 2023-07-15 18:15:12,463 INFO [StoreOpener-1c87ff5cd30bfdf1c603a34ec3bb14c0-1] regionserver.HStore(310): Store=1c87ff5cd30bfdf1c603a34ec3bb14c0/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:12,471 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1c87ff5cd30bfdf1c603a34ec3bb14c0; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9709849280, jitterRate=-0.09569981694221497}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:12,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1c87ff5cd30bfdf1c603a34ec3bb14c0: 2023-07-15 18:15:12,472 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0., pid=77, masterSystemTime=1689444912444 2023-07-15 18:15:12,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:12,475 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
2023-07-15 18:15:12,476 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=1c87ff5cd30bfdf1c603a34ec3bb14c0, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:12,476 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444912476"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444912476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444912476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444912476"}]},"ts":"1689444912476"} 2023-07-15 18:15:12,482 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=72 2023-07-15 18:15:12,482 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=72, state=SUCCESS; OpenRegionProcedure 1c87ff5cd30bfdf1c603a34ec3bb14c0, server=jenkins-hbase4.apache.org,44901,1689444902054 in 191 msec 2023-07-15 18:15:12,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1c87ff5cd30bfdf1c603a34ec3bb14c0, REOPEN/MOVE in 1.5060 sec 2023-07-15 18:15:12,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165, jenkins-hbase4.apache.org,40191,1689444902237] are moved back to default 2023-07-15 18:15:12,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-15 18:15:12,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:12,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:12,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:12,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-15 18:15:12,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:12,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:12,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:12,995 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:12,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-15 18:15:12,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-15 18:15:12,998 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:12,999 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:12,999 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:12,999 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:13,003 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:13,009 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,010 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 empty. 
2023-07-15 18:15:13,011 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,011 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-15 18:15:13,034 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:13,036 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => bc69d7464a14a23dab69dbf0fc0d7d26, NAME => 'Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing bc69d7464a14a23dab69dbf0fc0d7d26, disabling compactions & flushes 2023-07-15 18:15:13,051 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. after waiting 0 ms 2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,051 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:13,051 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:13,054 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:13,056 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444913055"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444913055"}]},"ts":"1689444913055"} 2023-07-15 18:15:13,057 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:13,058 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:13,059 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444913058"}]},"ts":"1689444913058"} 2023-07-15 18:15:13,060 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-15 18:15:13,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, ASSIGN}] 2023-07-15 18:15:13,070 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, ASSIGN 2023-07-15 18:15:13,071 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:13,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-15 18:15:13,222 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:13,223 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444913222"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444913222"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444913222"}]},"ts":"1689444913222"} 2023-07-15 18:15:13,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 
18:15:13,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-15 18:15:13,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bc69d7464a14a23dab69dbf0fc0d7d26, NAME => 'Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:13,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:13,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,385 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,386 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:13,386 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:13,387 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bc69d7464a14a23dab69dbf0fc0d7d26 columnFamilyName f 2023-07-15 18:15:13,388 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(310): Store=bc69d7464a14a23dab69dbf0fc0d7d26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:13,389 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:13,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bc69d7464a14a23dab69dbf0fc0d7d26; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10113032000, jitterRate=-0.05815050005912781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:13,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:13,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26., pid=80, masterSystemTime=1689444913378 2023-07-15 18:15:13,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:13,399 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:13,399 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444913399"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444913399"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444913399"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444913399"}]},"ts":"1689444913399"} 2023-07-15 18:15:13,403 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-15 18:15:13,403 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054 in 176 msec 2023-07-15 18:15:13,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-15 18:15:13,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, ASSIGN in 335 msec 2023-07-15 18:15:13,406 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:13,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444913406"}]},"ts":"1689444913406"} 2023-07-15 18:15:13,407 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-15 18:15:13,411 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:13,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 419 msec 2023-07-15 18:15:13,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-15 18:15:13,602 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-15 18:15:13,602 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-15 18:15:13,602 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:13,603 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40191] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:59440 deadline: 1689444973603, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44901 startCode=1689444902054. As of locationSeqNum=91. 2023-07-15 18:15:13,706 DEBUG [hconnection-0x3c71af44-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:13,708 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34256, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:13,723 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-15 18:15:13,723 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:13,724 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-15 18:15:13,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-15 18:15:13,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:13,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:13,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:13,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:13,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-15 18:15:13,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region bc69d7464a14a23dab69dbf0fc0d7d26 to RSGroup bar 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-15 18:15:13,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:13,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE 2023-07-15 18:15:13,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-15 18:15:13,738 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE 2023-07-15 18:15:13,739 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:13,739 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444913739"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444913739"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444913739"}]},"ts":"1689444913739"} 2023-07-15 18:15:13,743 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:13,898 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bc69d7464a14a23dab69dbf0fc0d7d26, disabling compactions & flushes 2023-07-15 18:15:13,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. after waiting 0 ms 2023-07-15 18:15:13,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:13,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:13,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:13,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:13,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bc69d7464a14a23dab69dbf0fc0d7d26 move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:13,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:13,916 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSED 2023-07-15 18:15:13,916 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444913916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444913916"}]},"ts":"1689444913916"} 2023-07-15 18:15:13,922 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-15 18:15:13,922 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054 in 176 msec 2023-07-15 18:15:13,923 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:14,073 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 18:15:14,073 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:14,074 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444914073"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444914073"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444914073"}]},"ts":"1689444914073"} 2023-07-15 18:15:14,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:14,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:14,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bc69d7464a14a23dab69dbf0fc0d7d26, NAME => 'Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:14,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:14,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,234 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,236 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:14,236 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:14,236 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bc69d7464a14a23dab69dbf0fc0d7d26 columnFamilyName f 2023-07-15 18:15:14,237 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(310): Store=bc69d7464a14a23dab69dbf0fc0d7d26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:14,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,240 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bc69d7464a14a23dab69dbf0fc0d7d26; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11712635520, jitterRate=0.0908241868019104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:14,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:14,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26., pid=83, masterSystemTime=1689444914228 2023-07-15 18:15:14,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:14,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:14,248 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:14,248 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444914248"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444914248"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444914248"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444914248"}]},"ts":"1689444914248"} 2023-07-15 18:15:14,252 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-15 18:15:14,252 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,37155,1689444906062 in 174 msec 2023-07-15 18:15:14,253 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE in 517 msec 2023-07-15 18:15:14,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-15 18:15:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-15 18:15:14,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:14,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:14,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:14,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-15 18:15:14,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:14,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 18:15:14,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:14,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42212 deadline: 1689446114751, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-15 18:15:14,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191] to rsgroup default 2023-07-15 18:15:14,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:14,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:42212 deadline: 1689446114753, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-15 18:15:14,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-15 18:15:14,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:14,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:14,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:14,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:14,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-15 18:15:14,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region bc69d7464a14a23dab69dbf0fc0d7d26 to RSGroup default 2023-07-15 18:15:14,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE 2023-07-15 18:15:14,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 18:15:14,767 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE 2023-07-15 18:15:14,768 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:14,768 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444914768"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444914768"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444914768"}]},"ts":"1689444914768"} 2023-07-15 18:15:14,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:14,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bc69d7464a14a23dab69dbf0fc0d7d26, disabling compactions & flushes 2023-07-15 18:15:14,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:14,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:14,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. after waiting 0 ms 2023-07-15 18:15:14,925 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:14,932 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:14,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:14,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:14,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bc69d7464a14a23dab69dbf0fc0d7d26 move to jenkins-hbase4.apache.org,44901,1689444902054 record at close sequenceid=5 2023-07-15 18:15:14,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:14,936 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSED 2023-07-15 18:15:14,937 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444914936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444914936"}]},"ts":"1689444914936"} 2023-07-15 18:15:14,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-15 18:15:14,941 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,37155,1689444906062 in 169 msec 2023-07-15 18:15:14,943 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:15,093 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:15,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444915093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444915093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444915093"}]},"ts":"1689444915093"} 2023-07-15 18:15:15,096 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:15,184 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 18:15:15,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:15,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bc69d7464a14a23dab69dbf0fc0d7d26, NAME => 'Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:15,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:15,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,254 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,256 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:15,256 DEBUG [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f 2023-07-15 18:15:15,256 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bc69d7464a14a23dab69dbf0fc0d7d26 columnFamilyName f 2023-07-15 18:15:15,258 INFO [StoreOpener-bc69d7464a14a23dab69dbf0fc0d7d26-1] regionserver.HStore(310): Store=bc69d7464a14a23dab69dbf0fc0d7d26/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:15,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,261 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:15,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bc69d7464a14a23dab69dbf0fc0d7d26; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11121919040, jitterRate=0.03580942749977112}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:15,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:15,271 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26., pid=86, masterSystemTime=1689444915247 2023-07-15 18:15:15,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:15,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:15,274 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:15,275 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444915274"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444915274"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444915274"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444915274"}]},"ts":"1689444915274"} 2023-07-15 18:15:15,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-15 18:15:15,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054 in 181 msec 2023-07-15 18:15:15,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, REOPEN/MOVE in 515 msec 2023-07-15 18:15:15,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-15 18:15:15,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-15 18:15:15,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:15,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:15,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:15,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 18:15:15,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:15,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42212 deadline: 1689446115774, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-15 18:15:15,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191] to rsgroup default 2023-07-15 18:15:15,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:15,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-15 18:15:15,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:15,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:15,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-15 18:15:15,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165, jenkins-hbase4.apache.org,40191,1689444902237] are moved back to bar 2023-07-15 18:15:15,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-15 18:15:15,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:15,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:15,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:15,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-15 18:15:15,791 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40191] ipc.CallRunner(144): callId: 206 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:59412 deadline: 1689444975791, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44901 startCode=1689444902054. As of locationSeqNum=6. 
2023-07-15 18:15:15,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:15,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:15,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:15,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:15,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:15,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:15,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:15,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:15,938 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-15 18:15:15,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-15 18:15:15,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:15,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 18:15:15,950 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444915950"}]},"ts":"1689444915950"} 2023-07-15 18:15:15,952 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-15 18:15:15,955 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-15 18:15:15,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, UNASSIGN}] 2023-07-15 18:15:15,958 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, UNASSIGN 2023-07-15 18:15:15,960 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:15,961 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444915960"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444915960"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444915960"}]},"ts":"1689444915960"} 2023-07-15 18:15:15,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:16,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 18:15:16,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bc69d7464a14a23dab69dbf0fc0d7d26, disabling compactions & flushes 2023-07-15 18:15:16,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. after waiting 0 ms 2023-07-15 18:15:16,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 2023-07-15 18:15:16,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 18:15:16,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26. 
2023-07-15 18:15:16,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bc69d7464a14a23dab69dbf0fc0d7d26: 2023-07-15 18:15:16,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:16,131 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=bc69d7464a14a23dab69dbf0fc0d7d26, regionState=CLOSED 2023-07-15 18:15:16,132 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689444916131"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444916131"}]},"ts":"1689444916131"} 2023-07-15 18:15:16,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-15 18:15:16,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure bc69d7464a14a23dab69dbf0fc0d7d26, server=jenkins-hbase4.apache.org,44901,1689444902054 in 169 msec 2023-07-15 18:15:16,137 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-15 18:15:16,137 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=bc69d7464a14a23dab69dbf0fc0d7d26, UNASSIGN in 179 msec 2023-07-15 18:15:16,138 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444916137"}]},"ts":"1689444916137"} 2023-07-15 18:15:16,139 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-15 18:15:16,141 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-15 18:15:16,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 203 msec 2023-07-15 18:15:16,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-15 18:15:16,253 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-15 18:15:16,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-15 18:15:16,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,256 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-15 18:15:16,257 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:16,262 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:16,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-15 18:15:16,264 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits] 2023-07-15 18:15:16,270 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/10.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26/recovered.edits/10.seqid 2023-07-15 18:15:16,271 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testFailRemoveGroup/bc69d7464a14a23dab69dbf0fc0d7d26 2023-07-15 18:15:16,271 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-15 18:15:16,274 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,276 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-15 18:15:16,278 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-15 18:15:16,279 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,279 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-15 18:15:16,279 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444916279"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:16,281 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 18:15:16,281 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => bc69d7464a14a23dab69dbf0fc0d7d26, NAME => 'Group_testFailRemoveGroup,,1689444912991.bc69d7464a14a23dab69dbf0fc0d7d26.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 18:15:16,281 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-15 18:15:16,281 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444916281"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:16,283 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-15 18:15:16,286 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-15 18:15:16,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 32 msec 2023-07-15 18:15:16,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-15 18:15:16,364 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-15 18:15:16,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:16,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
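Once the table is gone, the TestRSGroupsBase teardown restores rsgroup state: it lists the groups, moves the (empty) table and server sets back to 'default', removes and re-adds the 'master' group, and then attempts to move the master's own address (jenkins-hbase4.apache.org:41169) into that group. That last call fails below with a ConstraintException, since 41169 is the master's RPC port rather than one of the region servers registered in 'default', and the test records it as a non-fatal "Got this on setup, FYI" warning. A rough illustration of the failing call, using the RSGroupAdminClient named in the stack trace (an IA.Private class; the constructor and method signatures here are assumed from the branch-2.4 hbase-rsgroup module, and the host/port is copied from the log), could be:

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");  // re-create the 'master' group, as the teardown does
      try {
        // 41169 is the master's RPC port in this log, not a region server,
        // so the rsgroup endpoint rejects the move.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:41169")),
            "master");
      } catch (ConstraintException expected) {
        // "Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist."
      }
    }
  }
}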
2023-07-15 18:15:16,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:16,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:16,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:16,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:16,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:16,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:16,383 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:16,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:16,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:16,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:16,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:16,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446116395, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:16,395 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:16,397 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:16,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,398 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:16,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:16,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:16,416 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=509 (was 492) Potentially hanging thread: hconnection-0x534cd145-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data6/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2054999287_17 at /127.0.0.1:40464 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2054999287_17 at /127.0.0.1:57922 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2054999287_17 at /127.0.0.1:50394 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955-prefix:jenkins-hbase4.apache.org,44901,1689444902054.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-12 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1367817635_17 at /127.0.0.1:57906 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3c71af44-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1367817635_17 at /127.0.0.1:42466 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1367817635_17 at /127.0.0.1:50364 [Receiving block BP-670626647-172.31.14.131-1689444896326:blk_1073741856_1032] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1367817635_17 at /127.0.0.1:42472 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 768) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=447 (was 469), ProcessCount=172 (was 172), AvailableMemoryMB=3404 (was 3738) 2023-07-15 18:15:16,417 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-15 18:15:16,433 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=447, ProcessCount=172, AvailableMemoryMB=3404 2023-07-15 18:15:16,433 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-15 18:15:16,433 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-15 18:15:16,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:16,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:16,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:16,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:16,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:16,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:16,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:16,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:16,448 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:16,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:16,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,452 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:16,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:16,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:16,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:16,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446116460, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:16,461 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-15 18:15:16,465 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:16,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,467 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:16,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:16,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:16,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:16,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:16,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_187856844 2023-07-15 18:15:16,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:16,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:16,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:16,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,484 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155] to rsgroup Group_testMultiTableMove_187856844 2023-07-15 18:15:16,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:16,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062] are moved back to default 2023-07-15 18:15:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_187856844 2023-07-15 18:15:16,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:16,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:16,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:16,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_187856844 2023-07-15 18:15:16,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:16,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:16,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:16,499 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:16,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-15 18:15:16,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-15 18:15:16,501 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:16,502 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:16,502 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:16,503 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:16,513 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:16,515 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,516 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b empty. 2023-07-15 18:15:16,516 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,516 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-15 18:15:16,536 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:16,537 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8bdb106d1d24c994f40f097751c0119b, NAME => 'GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:16,572 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:16,572 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
8bdb106d1d24c994f40f097751c0119b, disabling compactions & flushes 2023-07-15 18:15:16,572 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,572 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,572 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. after waiting 0 ms 2023-07-15 18:15:16,573 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,573 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,573 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 8bdb106d1d24c994f40f097751c0119b: 2023-07-15 18:15:16,576 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:16,577 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444916577"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444916577"}]},"ts":"1689444916577"} 2023-07-15 18:15:16,579 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:16,580 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:16,581 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444916581"}]},"ts":"1689444916581"} 2023-07-15 18:15:16,582 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-15 18:15:16,596 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:16,596 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:16,596 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:16,596 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:16,597 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:16,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, ASSIGN}] 2023-07-15 18:15:16,600 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, ASSIGN 2023-07-15 18:15:16,603 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:16,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-15 18:15:16,754 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 18:15:16,755 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:16,756 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444916755"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444916755"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444916755"}]},"ts":"1689444916755"} 2023-07-15 18:15:16,758 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:16,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-15 18:15:16,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bdb106d1d24c994f40f097751c0119b, NAME => 'GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:16,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:16,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,915 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,917 DEBUG [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/f 2023-07-15 18:15:16,917 DEBUG [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/f 2023-07-15 18:15:16,917 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bdb106d1d24c994f40f097751c0119b columnFamilyName f 2023-07-15 18:15:16,918 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] regionserver.HStore(310): Store=8bdb106d1d24c994f40f097751c0119b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:16,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:16,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:16,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8bdb106d1d24c994f40f097751c0119b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9882975040, jitterRate=-0.07957622408866882}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:16,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8bdb106d1d24c994f40f097751c0119b: 2023-07-15 18:15:16,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b., pid=93, masterSystemTime=1689444916910 2023-07-15 18:15:16,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:16,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 
2023-07-15 18:15:16,935 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:16,935 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444916935"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444916935"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444916935"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444916935"}]},"ts":"1689444916935"} 2023-07-15 18:15:16,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-15 18:15:16,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,39889,1689444902165 in 179 msec 2023-07-15 18:15:16,941 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-15 18:15:16,942 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, ASSIGN in 342 msec 2023-07-15 18:15:16,942 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:16,942 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444916942"}]},"ts":"1689444916942"} 2023-07-15 18:15:16,944 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-15 18:15:16,948 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:16,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 452 msec 2023-07-15 18:15:17,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-15 18:15:17,107 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-15 18:15:17,107 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-15 18:15:17,107 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:17,112 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-15 18:15:17,112 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:17,112 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-15 18:15:17,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:17,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:17,117 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:17,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-15 18:15:17,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-15 18:15:17,120 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:17,121 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:17,122 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:17,122 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:17,128 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:17,131 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 empty. 
2023-07-15 18:15:17,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,132 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-15 18:15:17,168 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:17,170 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9b13df6dd440e2b6fa9dbb0ab952c6c7, NAME => 'GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 9b13df6dd440e2b6fa9dbb0ab952c6c7, disabling compactions & flushes 2023-07-15 18:15:17,203 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. after waiting 0 ms 2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,203 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 
2023-07-15 18:15:17,203 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 9b13df6dd440e2b6fa9dbb0ab952c6c7: 2023-07-15 18:15:17,206 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:17,207 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444917207"}]},"ts":"1689444917207"} 2023-07-15 18:15:17,211 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:17,213 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:17,213 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444917213"}]},"ts":"1689444917213"} 2023-07-15 18:15:17,215 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-15 18:15:17,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:17,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:17,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:17,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:17,219 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:17,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-15 18:15:17,219 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, ASSIGN}] 2023-07-15 18:15:17,222 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, ASSIGN 2023-07-15 18:15:17,223 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:17,373 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 18:15:17,375 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:17,375 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444917375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444917375"}]},"ts":"1689444917375"} 2023-07-15 18:15:17,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:17,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-15 18:15:17,533 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9b13df6dd440e2b6fa9dbb0ab952c6c7, NAME => 'GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:17,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:17,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,534 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,536 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,538 DEBUG [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/f 2023-07-15 18:15:17,538 DEBUG [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/f 2023-07-15 18:15:17,538 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9b13df6dd440e2b6fa9dbb0ab952c6c7 columnFamilyName f 2023-07-15 18:15:17,539 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] regionserver.HStore(310): Store=9b13df6dd440e2b6fa9dbb0ab952c6c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:17,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:17,549 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9b13df6dd440e2b6fa9dbb0ab952c6c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11984670720, jitterRate=0.11615943908691406}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:17,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9b13df6dd440e2b6fa9dbb0ab952c6c7: 2023-07-15 18:15:17,550 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7., pid=96, masterSystemTime=1689444917529 2023-07-15 18:15:17,554 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:17,555 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917554"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444917554"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444917554"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444917554"}]},"ts":"1689444917554"} 
2023-07-15 18:15:17,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,559 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-15 18:15:17,559 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,39889,1689444902165 in 180 msec 2023-07-15 18:15:17,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-15 18:15:17,562 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, ASSIGN in 340 msec 2023-07-15 18:15:17,562 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:17,562 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444917562"}]},"ts":"1689444917562"} 2023-07-15 18:15:17,564 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-15 18:15:17,567 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:17,568 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 453 msec 2023-07-15 18:15:17,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-15 18:15:17,722 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-15 18:15:17,722 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-15 18:15:17,722 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:17,727 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 2023-07-15 18:15:17,727 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:17,727 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 
2023-07-15 18:15:17,728 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:17,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-15 18:15:17,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:17,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-15 18:15:17,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:17,747 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_187856844 2023-07-15 18:15:17,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_187856844 2023-07-15 18:15:17,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:17,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:17,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:17,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:17,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_187856844 2023-07-15 18:15:17,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 9b13df6dd440e2b6fa9dbb0ab952c6c7 to RSGroup Group_testMultiTableMove_187856844 2023-07-15 18:15:17,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, REOPEN/MOVE 2023-07-15 18:15:17,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_187856844 2023-07-15 18:15:17,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 8bdb106d1d24c994f40f097751c0119b to RSGroup Group_testMultiTableMove_187856844 2023-07-15 18:15:17,762 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, REOPEN/MOVE 2023-07-15 18:15:17,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, REOPEN/MOVE 2023-07-15 18:15:17,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_187856844, current retry=0 2023-07-15 18:15:17,764 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:17,764 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, REOPEN/MOVE 2023-07-15 18:15:17,764 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444917763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444917763"}]},"ts":"1689444917763"} 2023-07-15 18:15:17,765 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:17,765 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917764"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444917764"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444917764"}]},"ts":"1689444917764"} 2023-07-15 18:15:17,765 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:17,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:17,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9b13df6dd440e2b6fa9dbb0ab952c6c7, disabling compactions & flushes 2023-07-15 18:15:17,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 
2023-07-15 18:15:17,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. after waiting 0 ms 2023-07-15 18:15:17,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,926 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:17,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:17,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9b13df6dd440e2b6fa9dbb0ab952c6c7: 2023-07-15 18:15:17,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9b13df6dd440e2b6fa9dbb0ab952c6c7 move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:17,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:17,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:17,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8bdb106d1d24c994f40f097751c0119b, disabling compactions & flushes 2023-07-15 18:15:17,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:17,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:17,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. after waiting 0 ms 2023-07-15 18:15:17,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 
2023-07-15 18:15:17,930 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=CLOSED 2023-07-15 18:15:17,930 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917930"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444917930"}]},"ts":"1689444917930"} 2023-07-15 18:15:17,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-15 18:15:17,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:17,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,39889,1689444902165 in 167 msec 2023-07-15 18:15:17,936 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:17,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 
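Editorial note: each REOPEN/MOVE closes the region on the source server (updates disabled, a <seqid>.seqid marker written under recovered.edits, and a relocation hint recorded at the close sequence id) before reopening it on a server of the target group. The same close/open cycle can be driven for a single region with the plain Admin API; a hedged sketch, reusing the encoded region name and destination server from this run purely for illustration:

// Minimal sketch under the assumption of an HBase 2.2+ Admin API; values are
// taken from this log only to make the example concrete.
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

public final class MoveRegionExample {
  static void moveOneRegion(Admin admin) throws Exception {
    // Encoded region name of GrouptestMultiTableMoveB's single region.
    byte[] encodedRegion = Bytes.toBytes("9b13df6dd440e2b6fa9dbb0ab952c6c7");
    // Destination server (host, port, startcode) as it appears in the log.
    ServerName dest =
        ServerName.valueOf("jenkins-hbase4.apache.org", 37155, 1689444906062L);
    // The master schedules a TransitRegionStateProcedure (REOPEN/MOVE):
    // CloseRegionProcedure on the source, then OpenRegionProcedure on dest.
    admin.move(encodedRegion, dest);
  }
}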
2023-07-15 18:15:17,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8bdb106d1d24c994f40f097751c0119b: 2023-07-15 18:15:17,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8bdb106d1d24c994f40f097751c0119b move to jenkins-hbase4.apache.org,37155,1689444906062 record at close sequenceid=2 2023-07-15 18:15:17,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:17,938 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=CLOSED 2023-07-15 18:15:17,938 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444917938"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444917938"}]},"ts":"1689444917938"} 2023-07-15 18:15:17,941 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-15 18:15:17,941 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,39889,1689444902165 in 172 msec 2023-07-15 18:15:17,942 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37155,1689444906062; forceNewPlan=false, retain=false 2023-07-15 18:15:18,086 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:18,086 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:18,087 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918086"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444918086"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444918086"}]},"ts":"1689444918086"} 2023-07-15 18:15:18,087 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918086"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444918086"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444918086"}]},"ts":"1689444918086"} 2023-07-15 18:15:18,089 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=97, state=RUNNABLE; OpenRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:18,090 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=98, 
state=RUNNABLE; OpenRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:18,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bdb106d1d24c994f40f097751c0119b, NAME => 'GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:18,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:18,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,308 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,309 DEBUG [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/f 2023-07-15 18:15:18,310 DEBUG [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/f 2023-07-15 18:15:18,310 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bdb106d1d24c994f40f097751c0119b columnFamilyName f 2023-07-15 18:15:18,311 INFO [StoreOpener-8bdb106d1d24c994f40f097751c0119b-1] regionserver.HStore(310): Store=8bdb106d1d24c994f40f097751c0119b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:18,312 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8bdb106d1d24c994f40f097751c0119b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9510179680, jitterRate=-0.11429549753665924}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:18,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8bdb106d1d24c994f40f097751c0119b: 2023-07-15 18:15:18,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b., pid=102, masterSystemTime=1689444918300 2023-07-15 18:15:18,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 
2023-07-15 18:15:18,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9b13df6dd440e2b6fa9dbb0ab952c6c7, NAME => 'GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:18,322 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:18,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,322 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444918322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444918322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444918322"}]},"ts":"1689444918322"} 2023-07-15 18:15:18,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:18,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,324 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,325 DEBUG [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/f 2023-07-15 18:15:18,325 DEBUG [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/f 2023-07-15 18:15:18,326 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=98 2023-07-15 18:15:18,326 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=98, state=SUCCESS; OpenRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,37155,1689444906062 in 234 msec 2023-07-15 18:15:18,326 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9b13df6dd440e2b6fa9dbb0ab952c6c7 columnFamilyName f 2023-07-15 18:15:18,327 INFO [StoreOpener-9b13df6dd440e2b6fa9dbb0ab952c6c7-1] regionserver.HStore(310): Store=9b13df6dd440e2b6fa9dbb0ab952c6c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:18,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, REOPEN/MOVE in 565 msec 2023-07-15 18:15:18,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:18,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9b13df6dd440e2b6fa9dbb0ab952c6c7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10623420960, jitterRate=-0.01061682403087616}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:18,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9b13df6dd440e2b6fa9dbb0ab952c6c7: 2023-07-15 18:15:18,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7., pid=101, masterSystemTime=1689444918300 2023-07-15 18:15:18,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:18,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 
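Editorial note: with both regions reopened on jenkins-hbase4.apache.org,37155, the move is complete; the next entries show the client confirming ownership through GetRSGroupInfoOfTable and GetRSGroupInfo. A rough sketch of those verification calls using RSGroupAdminClient (names copied from this run; this is not the test's exact code):

// Sketch only: confirm which group owns the moved tables and what the group
// now contains. Assumes a live Connection is supplied by the caller.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class VerifyMoveExample {
  static void verify(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Both tables should now be owned by the target group...
    RSGroupInfo ofTable =
        rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    System.out.println("group of table A: " + ofTable.getName());
    // ...and the group itself should list the moved tables and its servers.
    RSGroupInfo group =
        rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_187856844");
    System.out.println("tables: " + group.getTables() + ", servers: " + group.getServers());
  }
}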
2023-07-15 18:15:18,344 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:18,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918344"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444918344"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444918344"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444918344"}]},"ts":"1689444918344"} 2023-07-15 18:15:18,351 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=97 2023-07-15 18:15:18,351 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=97, state=SUCCESS; OpenRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,37155,1689444906062 in 258 msec 2023-07-15 18:15:18,353 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, REOPEN/MOVE in 592 msec 2023-07-15 18:15:18,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-15 18:15:18,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_187856844. 2023-07-15 18:15:18,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:18,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:18,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:18,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-15 18:15:18,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:18,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-15 18:15:18,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:18,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:18,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:18,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_187856844 2023-07-15 18:15:18,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:18,779 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-15 18:15:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-15 18:15:18,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:18,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 18:15:18,786 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444918786"}]},"ts":"1689444918786"} 2023-07-15 18:15:18,788 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-15 18:15:18,796 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-15 18:15:18,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, UNASSIGN}] 2023-07-15 18:15:18,799 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, UNASSIGN 2023-07-15 18:15:18,800 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:18,800 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918799"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444918799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444918799"}]},"ts":"1689444918799"} 2023-07-15 18:15:18,801 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 8bdb106d1d24c994f40f097751c0119b, 
server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:18,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 18:15:18,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8bdb106d1d24c994f40f097751c0119b, disabling compactions & flushes 2023-07-15 18:15:18,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. after waiting 0 ms 2023-07-15 18:15:18,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 2023-07-15 18:15:18,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:18,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b. 
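Editorial note: pid 103 is the DisableTableProcedure for GrouptestMultiTableMoveA, and the periodic "Checking to see if procedure is done pid=103" entries are the client polling for its completion; the DeleteTableProcedure that follows (pid 106, below) finishes the drop. A minimal sketch of the equivalent Admin calls, assuming the Admin instance and connection lifecycle are managed elsewhere:

// Minimal sketch, not the test's code: disable then delete the table, which
// is exactly the pid=103 / pid=106 sequence in this log.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableExample {
  static void dropTable(Admin admin) throws Exception {
    TableName tn = TableName.valueOf("GrouptestMultiTableMoveA");
    // Blocks until DisableTableProcedure finishes (the pid=103 polling above).
    admin.disableTable(tn);
    // Archives the region directories and removes the table from hbase:meta
    // (the DeleteTableProcedure / HFileArchiver entries below).
    admin.deleteTable(tn);
  }
}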
2023-07-15 18:15:18,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8bdb106d1d24c994f40f097751c0119b: 2023-07-15 18:15:18,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:18,966 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=8bdb106d1d24c994f40f097751c0119b, regionState=CLOSED 2023-07-15 18:15:18,967 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444918966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444918966"}]},"ts":"1689444918966"} 2023-07-15 18:15:18,973 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-15 18:15:18,973 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 8bdb106d1d24c994f40f097751c0119b, server=jenkins-hbase4.apache.org,37155,1689444906062 in 170 msec 2023-07-15 18:15:18,975 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-15 18:15:18,975 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=8bdb106d1d24c994f40f097751c0119b, UNASSIGN in 176 msec 2023-07-15 18:15:18,976 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444918976"}]},"ts":"1689444918976"} 2023-07-15 18:15:18,977 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-15 18:15:18,980 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-15 18:15:18,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 202 msec 2023-07-15 18:15:19,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-15 18:15:19,087 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-15 18:15:19,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-15 18:15:19,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,091 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_187856844' 2023-07-15 18:15:19,093 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:19,099 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,101 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits] 2023-07-15 18:15:19,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-15 18:15:19,108 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b/recovered.edits/7.seqid 2023-07-15 18:15:19,109 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveA/8bdb106d1d24c994f40f097751c0119b 2023-07-15 18:15:19,109 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-15 18:15:19,118 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,121 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-15 18:15:19,124 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-15 18:15:19,128 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,128 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
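Editorial note: DeleteTableProcedure does not remove region data outright. HFileArchiver relocates the region directory into the archive tree (archive/data/default/<table>/<region>), as the "Archived from ... to ..." entry above shows, before the table's rows are removed from hbase:meta. A small sketch for inspecting what was archived, reusing this run's paths (the rootdir is specific to this test instance):

// Sketch only: list archived region directories left behind by the delete.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ListArchiveExample {
  static void listArchivedRegions(Configuration conf) throws Exception {
    Path archived = new Path(
        "hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/"
            + "archive/data/default/GrouptestMultiTableMoveA");
    FileSystem fs = archived.getFileSystem(conf);
    for (FileStatus status : fs.listStatus(archived)) {
      // Each entry is an archived region directory, e.g. 8bdb106d1d24c994...
      System.out.println(status.getPath());
    }
  }
}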
2023-07-15 18:15:19,128 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444919128"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:19,131 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 18:15:19,131 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8bdb106d1d24c994f40f097751c0119b, NAME => 'GrouptestMultiTableMoveA,,1689444916496.8bdb106d1d24c994f40f097751c0119b.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 18:15:19,131 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-15 18:15:19,131 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444919131"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:19,133 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-15 18:15:19,135 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-15 18:15:19,137 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 47 msec 2023-07-15 18:15:19,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-15 18:15:19,206 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-15 18:15:19,206 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-15 18:15:19,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-15 18:15:19,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-15 18:15:19,213 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444919213"}]},"ts":"1689444919213"} 2023-07-15 18:15:19,214 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-15 18:15:19,216 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-15 18:15:19,220 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, UNASSIGN}] 2023-07-15 18:15:19,224 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, UNASSIGN 2023-07-15 18:15:19,224 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:19,224 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444919224"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444919224"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444919224"}]},"ts":"1689444919224"} 2023-07-15 18:15:19,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,37155,1689444906062}] 2023-07-15 18:15:19,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-15 18:15:19,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:19,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9b13df6dd440e2b6fa9dbb0ab952c6c7, disabling compactions & flushes 2023-07-15 18:15:19,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:19,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:19,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. after waiting 0 ms 2023-07-15 18:15:19,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 2023-07-15 18:15:19,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:19,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7. 
2023-07-15 18:15:19,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9b13df6dd440e2b6fa9dbb0ab952c6c7: 2023-07-15 18:15:19,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:19,391 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=9b13df6dd440e2b6fa9dbb0ab952c6c7, regionState=CLOSED 2023-07-15 18:15:19,391 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689444919391"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444919391"}]},"ts":"1689444919391"} 2023-07-15 18:15:19,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-15 18:15:19,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 9b13df6dd440e2b6fa9dbb0ab952c6c7, server=jenkins-hbase4.apache.org,37155,1689444906062 in 167 msec 2023-07-15 18:15:19,396 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-15 18:15:19,396 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9b13df6dd440e2b6fa9dbb0ab952c6c7, UNASSIGN in 177 msec 2023-07-15 18:15:19,396 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444919396"}]},"ts":"1689444919396"} 2023-07-15 18:15:19,398 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-15 18:15:19,406 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-15 18:15:19,409 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 199 msec 2023-07-15 18:15:19,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-15 18:15:19,515 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-15 18:15:19,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-15 18:15:19,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,519 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_187856844' 2023-07-15 18:15:19,520 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:19,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,525 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:19,527 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits] 2023-07-15 18:15:19,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-15 18:15:19,534 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits/7.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7/recovered.edits/7.seqid 2023-07-15 18:15:19,535 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/GrouptestMultiTableMoveB/9b13df6dd440e2b6fa9dbb0ab952c6c7 2023-07-15 18:15:19,535 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-15 18:15:19,537 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,540 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-15 18:15:19,542 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-15 18:15:19,547 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,547 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
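Editorial note: the repeated "Updating znode: /hbase/rsgroup/..." and "Writing ZK GroupInfo count" entries are RSGroupInfoManagerImpl mirroring group membership into ZooKeeper, one child znode per group. A speculative sketch of listing that mirror with a plain ZooKeeper client; the quorum address below is an assumption, not taken from this run:

// Speculative sketch: enumerate the rsgroup mirror znodes seen in the log.
import org.apache.zookeeper.ZooKeeper;

public final class ListRSGroupZNodesExample {
  public static void main(String[] args) throws Exception {
    // "localhost:2181" is a placeholder quorum address.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
    try {
      // Children of /hbase/rsgroup are one znode per group (default, master,
      // Group_testMultiTableMove_187856844, ...), matching the entries above.
      for (String group : zk.getChildren("/hbase/rsgroup", false)) {
        System.out.println(group);
      }
    } finally {
      zk.close();
    }
  }
}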
2023-07-15 18:15:19,547 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444919547"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:19,549 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 18:15:19,549 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9b13df6dd440e2b6fa9dbb0ab952c6c7, NAME => 'GrouptestMultiTableMoveB,,1689444917113.9b13df6dd440e2b6fa9dbb0ab952c6c7.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 18:15:19,549 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-15 18:15:19,549 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444919549"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:19,551 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-15 18:15:19,553 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-15 18:15:19,558 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 37 msec 2023-07-15 18:15:19,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-15 18:15:19,631 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-15 18:15:19,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:19,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155] to rsgroup default 2023-07-15 18:15:19,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_187856844 2023-07-15 18:15:19,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_187856844, current retry=0 2023-07-15 18:15:19,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062] are moved back to Group_testMultiTableMove_187856844 2023-07-15 18:15:19,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_187856844 => default 2023-07-15 18:15:19,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_187856844 2023-07-15 18:15:19,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:19,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
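Editorial note: the tail of the run is the shared teardown from TestRSGroupsBase: move any tables and servers back to the default group, then remove the per-test group (the MoveServers and RemoveRSGroup requests above). A hedged sketch of that cleanup with RSGroupAdminClient, reusing this run's server address and group name for illustration:

// Sketch only: return the test group's server to "default" and drop the group.
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class TeardownGroupExample {
  static void teardown(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Move the region server that was dedicated to the test group back to the
    // built-in "default" group (the MoveServers request logged above).
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37155)),
        "default");
    // With no tables and no servers left, the group can be removed
    // (the RemoveRSGroup request logged above).
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_187856844");
  }
}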
2023-07-15 18:15:19,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:19,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:19,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:19,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,660 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:19,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:19,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:19,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:19,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446119672, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:19,674 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:19,676 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:19,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,677 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:19,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,697 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 509), OpenFileDescriptor=779 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 447), ProcessCount=172 (was 172), AvailableMemoryMB=3083 (was 3404) 2023-07-15 18:15:19,697 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-15 18:15:19,713 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=172, AvailableMemoryMB=3083 2023-07-15 18:15:19,713 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-15 18:15:19,713 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-15 18:15:19,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:19,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:19,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:19,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:19,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,728 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:19,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:19,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:19,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:19,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446119737, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:19,738 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:19,739 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:19,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,740 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:19,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-15 18:15:19,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup oldGroup 2023-07-15 18:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:19,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to default 2023-07-15 18:15:19,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-15 18:15:19,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-15 18:15:19,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-15 18:15:19,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,767 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-15 18:15:19,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 18:15:19,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:19,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40191] to rsgroup anotherRSGroup 2023-07-15 18:15:19,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 18:15:19,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:19,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:19,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40191,1689444902237] are moved back to default 2023-07-15 18:15:19,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-15 18:15:19,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,789 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-15 18:15:19,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-15 18:15:19,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-15 18:15:19,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:42212 deadline: 1689446119797, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-15 18:15:19,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-15 18:15:19,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:42212 deadline: 1689446119799, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-15 18:15:19,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-15 18:15:19,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:42212 deadline: 1689446119800, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-15 18:15:19,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-15 18:15:19,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:42212 deadline: 1689446119801, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-15 18:15:19,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
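[editor's note] The four rename attempts logged above each fail a different constraint check in RSGroupInfoManagerImpl.renameRSGroup: source group missing, target name already taken, and the immutable "default" group. A minimal client-side sketch of those checks follows; it assumes the rsgroup client exposes a renameRSGroup(String, String) method mirroring the RSGroupAdminService.RenameRSGroup RPC seen in the log, and the class and helper names here are illustrative, not taken from the test source.

// Hypothetical sketch, not verbatim from TestRSGroupsAdmin1#testRenameRSGroupConstraints.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupConstraintsSketch {
  // Expect the master to reject the rename with a ConstraintException, as in the log above.
  static void expectConstraintFailure(RSGroupAdminClient admin, String from, String to) {
    try {
      admin.renameRSGroup(from, to); // assumed client method wrapping the RenameRSGroup RPC
      throw new AssertionError("rename " + from + " -> " + to + " unexpectedly succeeded");
    } catch (ConstraintException expected) {
      // Matches the logged messages: "RSGroup ... does not exist", "Group already exists: ...",
      // "Can't rename default rsgroup".
    } catch (IOException other) {
      throw new AssertionError(other);
    }
  }

  static void run(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    expectConstraintFailure(admin, "nonExistingRSGroup", "newRSGroup1"); // source group missing
    expectConstraintFailure(admin, "oldGroup", "anotherRSGroup");        // target name taken
    expectConstraintFailure(admin, "default", "newRSGroup2");            // default cannot be renamed
    expectConstraintFailure(admin, "oldGroup", "default");               // "default" already exists
  }
}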
2023-07-15 18:15:19,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40191] to rsgroup default 2023-07-15 18:15:19,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-15 18:15:19,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:19,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-15 18:15:19,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40191,1689444902237] are moved back to anotherRSGroup 2023-07-15 18:15:19,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-15 18:15:19,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-15 18:15:19,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 18:15:19,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-15 18:15:19,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup default 2023-07-15 18:15:19,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-15 18:15:19,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-15 18:15:19,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to oldGroup 2023-07-15 18:15:19,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-15 18:15:19,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-15 18:15:19,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:19,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
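[editor's note] The entries above show the between-test cleanup pattern: move each extra group's servers and tables back to "default" (the master simply ignores empty sets, per "moveTables() passed an empty set. Ignoring."), then remove the now-empty groups oldGroup and anotherRSGroup. A compact sketch of that sequence follows; the RSGroupAdminClient methods named here (getRSGroupInfo, moveTables, moveServers, removeRSGroup) match the calls visible in the stack traces, but the helper itself is invented for illustration.

// Hypothetical sketch of the restore-to-default cleanup recorded in the log.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RestoreDefaultGroupSketch {
  static void restore(Connection conn, String... groupsToDrop) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    for (String group : groupsToDrop) {
      RSGroupInfo info = admin.getRSGroupInfo(group);
      if (info == null) {
        continue; // group was never created or is already gone
      }
      // Empty sets are accepted and ignored by the master, so these calls are safe unconditionally.
      admin.moveTables(info.getTables(), RSGroupInfo.DEFAULT_GROUP);
      admin.moveServers(info.getServers(), RSGroupInfo.DEFAULT_GROUP);
      admin.removeRSGroup(group); // legal only once the group holds no servers or tables
    }
  }
}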
2023-07-15 18:15:19,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:19,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:19,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:19,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,849 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:19,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:19,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:19,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:19,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446119859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:19,860 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:19,862 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:19,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,863 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:19,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,881 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 508) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=779 (was 779), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 434), ProcessCount=172 (was 172), AvailableMemoryMB=3081 (was 3083) 2023-07-15 18:15:19,881 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-15 18:15:19,900 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=172, AvailableMemoryMB=3080 2023-07-15 18:15:19,901 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-15 18:15:19,901 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-15 18:15:19,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:19,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
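Not part of the captured log: the ConstraintException traces above come from the per-test rsgroup cleanup that TestRSGroupsBase runs between methods. Below is a minimal sketch of that cleanup, assuming the RSGroupAdminClient API named in the stack trace (moveTables, moveServers, removeRSGroup, addRSGroup); the wrapper class and method are illustrative, and only the group names and the tolerated exception are taken from the log entries above.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTestCleanupSketch {
  // Mirrors the sequence logged above: MoveTables([]) and MoveServers([]) back to
  // 'default', RemoveRSGroup/AddRSGroup for the helper 'master' group, then an
  // attempt to move the master's address into it, which the server rejects with
  // the ConstraintException shown in the trace (the master is not a region server).
  static void restoreRSGroups(Connection conn, Address masterAddress) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveTables(Collections.emptySet(), "default");   // "move tables [] to rsgroup default"
    rsGroupAdmin.moveServers(Collections.emptySet(), "default");  // "move servers [] to rsgroup default"
    rsGroupAdmin.removeRSGroup("master");
    rsGroupAdmin.addRSGroup("master");
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException expected) {
      // Logged by the test base as "Got this on setup, FYI" and ignored.
    }
  }
}
```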
2023-07-15 18:15:19,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:19,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:19,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:19,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:19,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:19,915 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:19,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:19,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:19,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:19,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:19,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446119927, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:19,928 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:19,929 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:19,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,930 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:19,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:19,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-15 18:15:19,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:19,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:19,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup oldgroup 2023-07-15 18:15:19,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:19,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:19,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to default 2023-07-15 18:15:19,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-15 18:15:19,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:19,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:19,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:19,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-15 18:15:19,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:19,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:19,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-15 18:15:19,967 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:19,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-15 18:15:19,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-15 18:15:19,974 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:19,975 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:19,975 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:19,976 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:19,979 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:19,982 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:19,983 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 empty. 
2023-07-15 18:15:19,983 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:19,983 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-15 18:15:20,018 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:20,020 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0a660dd6e6dc1267929847565f5129c8, NAME => 'testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 0a660dd6e6dc1267929847565f5129c8, disabling compactions & flushes 2023-07-15 18:15:20,038 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. after waiting 0 ms 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,038 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,038 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:20,041 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:20,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444920042"}]},"ts":"1689444920042"} 2023-07-15 18:15:20,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:20,045 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:20,045 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444920045"}]},"ts":"1689444920045"} 2023-07-15 18:15:20,046 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-15 18:15:20,049 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:20,050 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:20,050 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:20,050 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:20,050 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, ASSIGN}] 2023-07-15 18:15:20,052 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, ASSIGN 2023-07-15 18:15:20,053 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:20,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-15 18:15:20,203 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-15 18:15:20,205 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:20,205 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920205"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444920205"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444920205"}]},"ts":"1689444920205"} 2023-07-15 18:15:20,208 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:20,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-15 18:15:20,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a660dd6e6dc1267929847565f5129c8, NAME => 'testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:20,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:20,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,366 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,367 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,369 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:20,369 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:20,369 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a660dd6e6dc1267929847565f5129c8 columnFamilyName tr 2023-07-15 18:15:20,370 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(310): Store=0a660dd6e6dc1267929847565f5129c8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:20,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:20,377 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0a660dd6e6dc1267929847565f5129c8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11367112960, jitterRate=0.058644890785217285}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:20,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:20,377 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8., pid=113, masterSystemTime=1689444920362 2023-07-15 18:15:20,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,379 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
2023-07-15 18:15:20,379 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:20,379 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920379"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444920379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444920379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444920379"}]},"ts":"1689444920379"} 2023-07-15 18:15:20,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-15 18:15:20,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237 in 172 msec 2023-07-15 18:15:20,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-15 18:15:20,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, ASSIGN in 332 msec 2023-07-15 18:15:20,384 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:20,385 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444920384"}]},"ts":"1689444920384"} 2023-07-15 18:15:20,386 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-15 18:15:20,394 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:20,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 430 msec 2023-07-15 18:15:20,435 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 18:15:20,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-15 18:15:20,573 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-15 18:15:20,573 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-15 18:15:20,573 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:20,577 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
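Not part of the captured log: the CreateTableProcedure above (pid=111) is the server-side result of an ordinary client table creation. The sketch below shows what that call looks like, assuming the standard HBase 2.x Admin and descriptor-builder API; the table name 'testRename' and column family 'tr' (with BLOOMFILTER => 'NONE') come from the log, while the class name and configuration handling are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One column family 'tr'; most attributes printed in the log
      // (VERSIONS => '1', BLOCKSIZE => '65536', ...) are the defaults.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
              .setBloomFilterType(BloomType.NONE)  // the log shows BLOOMFILTER => 'NONE'
              .build())
          .build());
      // The test harness then blocks until the single region is assigned, which is
      // what "Waiting until all regions of table testRename get assigned" reflects.
    }
  }
}
```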
2023-07-15 18:15:20,578 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:20,578 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-15 18:15:20,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-15 18:15:20,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:20,582 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:20,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:20,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:20,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-15 18:15:20,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 0a660dd6e6dc1267929847565f5129c8 to RSGroup oldgroup 2023-07-15 18:15:20,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:20,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:20,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:20,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:20,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:20,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE 2023-07-15 18:15:20,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-15 18:15:20,586 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE 2023-07-15 18:15:20,587 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:20,587 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920587"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444920587"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444920587"}]},"ts":"1689444920587"} 2023-07-15 18:15:20,588 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:20,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0a660dd6e6dc1267929847565f5129c8, disabling compactions & flushes 2023-07-15 18:15:20,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. after waiting 0 ms 2023-07-15 18:15:20,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:20,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:20,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
2023-07-15 18:15:20,749 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:20,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0a660dd6e6dc1267929847565f5129c8 move to jenkins-hbase4.apache.org,39889,1689444902165 record at close sequenceid=2 2023-07-15 18:15:20,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:20,751 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=CLOSED 2023-07-15 18:15:20,751 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920751"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444920751"}]},"ts":"1689444920751"} 2023-07-15 18:15:20,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-15 18:15:20,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237 in 165 msec 2023-07-15 18:15:20,765 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39889,1689444902165; forceNewPlan=false, retain=false 2023-07-15 18:15:20,915 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 18:15:20,916 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:20,916 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444920915"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444920915"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444920915"}]},"ts":"1689444920915"} 2023-07-15 18:15:20,918 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:21,074 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
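Note (not part of the captured log): each `Put {"totalColumns":...}` entry above is RegionStateStore persisting the region's transition (CLOSING, CLOSED, OPENING, OPEN) into the `info` family of `hbase:meta`. A minimal sketch of reading those columns back with the standard client API, assuming a reachable branch-2.4 cluster and using the region row key exactly as printed in the log:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PrintRegionStateFromMeta {
  public static void main(String[] args) throws Exception {
    // Region row key as it appears in the log above (table,startkey,timestamp.encodedname.)
    byte[] row = Bytes.toBytes("testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(row));
      // info:state holds CLOSING/CLOSED/OPENING/OPEN; info:server holds host:port once the region is OPEN.
      System.out.println("state  = " + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
      System.out.println("server = " + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"))));
    }
  }
}
```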
2023-07-15 18:15:21,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a660dd6e6dc1267929847565f5129c8, NAME => 'testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:21,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:21,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,076 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,077 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:21,078 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:21,078 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a660dd6e6dc1267929847565f5129c8 columnFamilyName tr 2023-07-15 18:15:21,079 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(310): Store=0a660dd6e6dc1267929847565f5129c8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:21,080 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:21,086 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0a660dd6e6dc1267929847565f5129c8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9492685120, jitterRate=-0.11592480540275574}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:21,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:21,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8., pid=116, masterSystemTime=1689444921069 2023-07-15 18:15:21,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:21,088 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:21,089 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:21,089 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444921089"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444921089"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444921089"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444921089"}]},"ts":"1689444921089"} 2023-07-15 18:15:21,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-15 18:15:21,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,39889,1689444902165 in 172 msec 2023-07-15 18:15:21,093 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE in 507 msec 2023-07-15 18:15:21,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-15 18:15:21,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
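Note (illustrative only, not part of the captured log): the span above is a single RSGroupAdminService.MoveTables call end to end — the endpoint rewrites the group znodes, then RSGroupAdminServer moves each region of testRename with a REOPEN/MOVE TransitRegionStateProcedure (close on the old server, reopen on a server in the target group) before reporting "All regions ... moved to target group oldgroup". A hedged client-side sketch that would drive the same path, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient used by these tests:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToRSGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Re-points the table's group mapping, then the master relocates every region of the
      // table onto servers of the target group, one REOPEN/MOVE procedure per region.
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    }
  }
}
```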
2023-07-15 18:15:21,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:21,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:21,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:21,593 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:21,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 18:15:21,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:21,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-15 18:15:21,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:21,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 18:15:21,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:21,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:21,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:21,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-15 18:15:21,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:21,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:21,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:21,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 
18:15:21,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:21,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:21,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:21,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:21,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40191] to rsgroup normal 2023-07-15 18:15:21,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:21,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:21,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:21,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40191,1689444902237] are moved back to default 2023-07-15 18:15:21,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-15 18:15:21,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:21,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:21,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:21,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-15 18:15:21,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
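Note (illustrative only, not part of the captured log): the entries above cover an AddRSGroup for "normal" followed by a MoveServers of jenkins-hbase4.apache.org:40191 out of default ("Move servers done: default => normal"). A hedged sketch of the equivalent client calls, again assuming the branch-2.4 RSGroupAdminClient; note that rsgroup calls address servers by host:port only, without the start code that appears elsewhere in the log:

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServer {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("normal");
      // host:port Address, matching the "move servers [jenkins-hbase4.apache.org:40191]" log entry.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:40191")), "normal");
    }
  }
}
```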
2023-07-15 18:15:21,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:21,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-15 18:15:21,639 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:21,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-15 18:15:21,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-15 18:15:21,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:21,641 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:21,642 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:21,642 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:21,643 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:21,645 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:21,646 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:21,647 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b empty. 
2023-07-15 18:15:21,647 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:21,647 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-15 18:15:21,663 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:21,665 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => a0c094ba580bcbf508d170378db1325b, NAME => 'unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing a0c094ba580bcbf508d170378db1325b, disabling compactions & flushes 2023-07-15 18:15:21,685 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. after waiting 0 ms 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:21,685 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:21,685 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:21,688 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:21,689 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444921689"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444921689"}]},"ts":"1689444921689"} 2023-07-15 18:15:21,691 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:21,692 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:21,692 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444921692"}]},"ts":"1689444921692"} 2023-07-15 18:15:21,694 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-15 18:15:21,698 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, ASSIGN}] 2023-07-15 18:15:21,701 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, ASSIGN 2023-07-15 18:15:21,702 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:21,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-15 18:15:21,854 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:21,854 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444921854"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444921854"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444921854"}]},"ts":"1689444921854"} 2023-07-15 18:15:21,856 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:21,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-15 18:15:22,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
2023-07-15 18:15:22,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a0c094ba580bcbf508d170378db1325b, NAME => 'unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:22,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:22,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,014 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,015 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:22,015 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:22,016 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a0c094ba580bcbf508d170378db1325b columnFamilyName ut 2023-07-15 18:15:22,016 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(310): Store=a0c094ba580bcbf508d170378db1325b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:22,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:22,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a0c094ba580bcbf508d170378db1325b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10555904640, jitterRate=-0.016904771327972412}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:22,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:22,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b., pid=119, masterSystemTime=1689444922008 2023-07-15 18:15:22,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
2023-07-15 18:15:22,026 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:22,026 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444922025"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444922025"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444922025"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444922025"}]},"ts":"1689444922025"} 2023-07-15 18:15:22,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-15 18:15:22,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054 in 171 msec 2023-07-15 18:15:22,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-15 18:15:22,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, ASSIGN in 331 msec 2023-07-15 18:15:22,031 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:22,032 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444922032"}]},"ts":"1689444922032"} 2023-07-15 18:15:22,033 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-15 18:15:22,035 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:22,036 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 399 msec 2023-07-15 18:15:22,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-15 18:15:22,243 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-15 18:15:22,244 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-15 18:15:22,244 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:22,248 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-15 18:15:22,248 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:22,248 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
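Note (illustrative only, not part of the captured log): pid=117 above is a CreateTableProcedure for 'unmovedTable' stepping through PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE and POST_OPERATION, producing a single region with one column family 'ut' on default settings. A minimal sketch of the equivalent client request with the standard 2.x Admin API:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateUnmovedTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One column family 'ut' with defaults, matching the descriptor printed in the log;
      // the call returns once the CreateTableProcedure reports SUCCESS and the region is assigned.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("unmovedTable"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
          .build());
    }
  }
}
```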
2023-07-15 18:15:22,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-15 18:15:22,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-15 18:15:22,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:22,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:22,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:22,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:22,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-15 18:15:22,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region a0c094ba580bcbf508d170378db1325b to RSGroup normal 2023-07-15 18:15:22,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE 2023-07-15 18:15:22,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-15 18:15:22,259 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE 2023-07-15 18:15:22,260 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:22,260 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444922260"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444922260"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444922260"}]},"ts":"1689444922260"} 2023-07-15 18:15:22,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:22,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a0c094ba580bcbf508d170378db1325b, disabling compactions & flushes 2023-07-15 18:15:22,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
2023-07-15 18:15:22,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. after waiting 0 ms 2023-07-15 18:15:22,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:22,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:22,421 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a0c094ba580bcbf508d170378db1325b move to jenkins-hbase4.apache.org,40191,1689444902237 record at close sequenceid=2 2023-07-15 18:15:22,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,423 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=CLOSED 2023-07-15 18:15:22,423 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444922422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444922422"}]},"ts":"1689444922422"} 2023-07-15 18:15:22,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-15 18:15:22,432 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054 in 163 msec 2023-07-15 18:15:22,432 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:22,439 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-15 18:15:22,583 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:22,583 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444922583"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444922583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444922583"}]},"ts":"1689444922583"} 2023-07-15 18:15:22,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:22,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a0c094ba580bcbf508d170378db1325b, NAME => 'unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:22,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:22,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,744 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,745 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:22,745 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:22,745 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
a0c094ba580bcbf508d170378db1325b columnFamilyName ut 2023-07-15 18:15:22,746 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(310): Store=a0c094ba580bcbf508d170378db1325b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:22,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:22,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a0c094ba580bcbf508d170378db1325b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9767335040, jitterRate=-0.09034603834152222}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:22,752 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:22,752 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b., pid=122, masterSystemTime=1689444922737 2023-07-15 18:15:22,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:22,754 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
2023-07-15 18:15:22,754 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:22,755 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444922754"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444922754"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444922754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444922754"}]},"ts":"1689444922754"} 2023-07-15 18:15:22,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-15 18:15:22,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,40191,1689444902237 in 171 msec 2023-07-15 18:15:22,759 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE in 501 msec 2023-07-15 18:15:23,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-15 18:15:23,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-15 18:15:23,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:23,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:23,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:23,266 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 18:15:23,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:23,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-15 18:15:23,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:23,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 18:15:23,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:23,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-15 18:15:23,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:23,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:23,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:23,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:23,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-15 18:15:23,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-15 18:15:23,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:23,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:23,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-15 18:15:23,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:23,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-15 18:15:23,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:23,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-15 18:15:23,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:23,289 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:23,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:23,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-15 18:15:23,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:23,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:23,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:23,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:23,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:23,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-15 18:15:23,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region a0c094ba580bcbf508d170378db1325b to RSGroup default 2023-07-15 18:15:23,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE 2023-07-15 18:15:23,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 18:15:23,308 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE 2023-07-15 18:15:23,309 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:23,309 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444923309"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444923309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444923309"}]},"ts":"1689444923309"} 2023-07-15 18:15:23,311 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:23,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a0c094ba580bcbf508d170378db1325b, disabling compactions & flushes 2023-07-15 18:15:23,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. after waiting 0 ms 2023-07-15 18:15:23,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:23,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:23,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a0c094ba580bcbf508d170378db1325b move to jenkins-hbase4.apache.org,44901,1689444902054 record at close sequenceid=5 2023-07-15 18:15:23,475 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,475 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=CLOSED 2023-07-15 18:15:23,475 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444923475"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444923475"}]},"ts":"1689444923475"} 2023-07-15 18:15:23,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-15 18:15:23,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,40191,1689444902237 in 166 msec 2023-07-15 18:15:23,479 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:23,629 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:23,630 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444923629"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444923629"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444923629"}]},"ts":"1689444923629"} 2023-07-15 18:15:23,631 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:23,786 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a0c094ba580bcbf508d170378db1325b, NAME => 'unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:23,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:23,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,789 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,790 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:23,790 DEBUG [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/ut 2023-07-15 18:15:23,790 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a0c094ba580bcbf508d170378db1325b columnFamilyName ut 2023-07-15 18:15:23,791 INFO [StoreOpener-a0c094ba580bcbf508d170378db1325b-1] regionserver.HStore(310): Store=a0c094ba580bcbf508d170378db1325b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:23,791 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:23,797 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a0c094ba580bcbf508d170378db1325b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10011091840, jitterRate=-0.06764441728591919}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:23,797 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:23,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b., pid=125, masterSystemTime=1689444923783 2023-07-15 18:15:23,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:23,800 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
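Annotation: the entries above show the MoveTables request for unmovedTable triggering one TransitRegionStateProcedure (REOPEN/MOVE) that closes the region on its old server and reopens it on a server in the target group. A minimal client-side sketch of the same operation follows; it assumes the RSGroupAdminClient constructor and moveTables signature used by these tests, so treat the exact API shape as an assumption rather than a guarantee.

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToDefaultGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient talks to the RSGroupAdminService coprocessor endpoint on the master;
      // the Connection-taking constructor is assumed from the client these tests use.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Mirrors the "move tables [unmovedTable] to rsgroup default" call in the log.
      // The master responds with one TransitRegionStateProcedure (REOPEN/MOVE) per region
      // and the RPC only returns once those procedures finish (ProcedureSyncWait).
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")),
        "default");
    }
  }
}
```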
2023-07-15 18:15:23,801 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a0c094ba580bcbf508d170378db1325b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:23,802 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689444923801"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444923801"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444923801"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444923801"}]},"ts":"1689444923801"} 2023-07-15 18:15:23,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-15 18:15:23,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure a0c094ba580bcbf508d170378db1325b, server=jenkins-hbase4.apache.org,44901,1689444902054 in 172 msec 2023-07-15 18:15:23,806 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a0c094ba580bcbf508d170378db1325b, REOPEN/MOVE in 498 msec 2023-07-15 18:15:24,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-15 18:15:24,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-15 18:15:24,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:24,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40191] to rsgroup default 2023-07-15 18:15:24,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-15 18:15:24,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:24,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:24,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:24,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:24,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-15 18:15:24,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,40191,1689444902237] are moved back to normal 2023-07-15 18:15:24,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-15 18:15:24,315 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:24,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-15 18:15:24,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:24,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:24,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:24,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 18:15:24,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:24,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:24,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:24,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:24,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:24,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:24,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:24,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:24,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:24,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:24,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:24,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-15 18:15:24,333 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:24,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:24,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:24,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-15 18:15:24,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(345): Moving region 0a660dd6e6dc1267929847565f5129c8 to RSGroup default 2023-07-15 18:15:24,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE 2023-07-15 18:15:24,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-15 18:15:24,337 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE 2023-07-15 18:15:24,337 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:24,337 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444924337"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444924337"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444924337"}]},"ts":"1689444924337"} 2023-07-15 18:15:24,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,39889,1689444902165}] 2023-07-15 18:15:24,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0a660dd6e6dc1267929847565f5129c8, disabling compactions & flushes 2023-07-15 18:15:24,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
after waiting 0 ms 2023-07-15 18:15:24,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,504 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-15 18:15:24,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:24,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0a660dd6e6dc1267929847565f5129c8 move to jenkins-hbase4.apache.org,40191,1689444902237 record at close sequenceid=5 2023-07-15 18:15:24,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,508 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=CLOSED 2023-07-15 18:15:24,508 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444924508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444924508"}]},"ts":"1689444924508"} 2023-07-15 18:15:24,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-15 18:15:24,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,39889,1689444902165 in 172 msec 2023-07-15 18:15:24,512 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:24,663 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
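Annotation: the preceding run covers the teardown's server moves (normal => default) followed by removal of the now-empty groups. A short sketch of that cleanup sequence is below, using the host:port value that appears in the log; the moveServers/removeRSGroup signatures are assumed to match the RSGroupAdminClient seen in the stack traces later in this log.

```java
import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class ReturnServerToDefaultGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the region server back into 'default'; before updating the group znodes the
      // master drains any regions the server still hosts for its old group, which is what
      // the "Moving 0 region(s) to group normal, current retry=0" lines record.
      Address server = Address.fromParts("jenkins-hbase4.apache.org", 40191); // value from the log
      rsGroupAdmin.moveServers(Collections.singleton(server), "default");
      // A group that owns no servers and no tables can then be dropped, as the teardown
      // does for 'normal' and 'master' (ZK GroupInfo count shrinking in the log).
      rsGroupAdmin.removeRSGroup("normal");
    }
  }
}
```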
2023-07-15 18:15:24,663 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:24,663 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444924663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444924663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444924663"}]},"ts":"1689444924663"} 2023-07-15 18:15:24,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:24,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0a660dd6e6dc1267929847565f5129c8, NAME => 'testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:24,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:24,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,831 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,831 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:24,832 DEBUG [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/tr 2023-07-15 18:15:24,832 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0a660dd6e6dc1267929847565f5129c8 columnFamilyName tr 2023-07-15 18:15:24,832 INFO [StoreOpener-0a660dd6e6dc1267929847565f5129c8-1] regionserver.HStore(310): Store=0a660dd6e6dc1267929847565f5129c8/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:24,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:24,837 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0a660dd6e6dc1267929847565f5129c8; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10373382720, jitterRate=-0.03390344977378845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:24,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:24,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8., pid=128, masterSystemTime=1689444924825 2023-07-15 18:15:24,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:24,840 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
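Annotation: after the REOPEN/MOVE above, the testRename region is hosted by a different server than before the move. A small sketch of how a client could confirm the new location, forcing a fresh hbase:meta lookup so the stale cached location is not returned; this is an illustrative check, not part of the test itself.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckRegionLocationAfterMove {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
      // reload=true bypasses the client-side location cache, so the location reflects the
      // post-move assignment recorded in hbase:meta by the TransitRegionStateProcedure.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("testRename region " + loc.getRegion().getEncodedName()
        + " is on " + loc.getServerName());
    }
  }
}
```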
2023-07-15 18:15:24,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=0a660dd6e6dc1267929847565f5129c8, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:24,840 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689444924840"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444924840"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444924840"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444924840"}]},"ts":"1689444924840"} 2023-07-15 18:15:24,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-15 18:15:24,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 0a660dd6e6dc1267929847565f5129c8, server=jenkins-hbase4.apache.org,40191,1689444902237 in 176 msec 2023-07-15 18:15:24,844 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=0a660dd6e6dc1267929847565f5129c8, REOPEN/MOVE in 507 msec 2023-07-15 18:15:25,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-15 18:15:25,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-15 18:15:25,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:25,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup default 2023-07-15 18:15:25,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-15 18:15:25,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:25,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-15 18:15:25,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to newgroup 2023-07-15 18:15:25,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-15 18:15:25,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:25,343 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-15 18:15:25,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:25,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:25,351 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:25,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:25,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:25,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:25,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:25,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446125370, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:25,370 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:25,372 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:25,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,373 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:25,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:25,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,390 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=502 (was 512), OpenFileDescriptor=768 (was 779), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 434), ProcessCount=172 (was 172), AvailableMemoryMB=2901 (was 3080) 2023-07-15 18:15:25,390 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-15 18:15:25,405 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=502, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=172, AvailableMemoryMB=2900 2023-07-15 18:15:25,406 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-15 18:15:25,406 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-15 18:15:25,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:25,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:25,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:25,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:25,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:25,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:25,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:25,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:25,422 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:25,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:25,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:25,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:25,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:25,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446125432, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:25,433 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:25,435 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:25,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,436 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:25,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:25,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-15 18:15:25,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:25,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-15 18:15:25,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-15 18:15:25,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-15 18:15:25,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-15 18:15:25,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:42212 deadline: 1689446125445, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-15 18:15:25,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-15 18:15:25,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:42212 deadline: 1689446125447, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-15 18:15:25,450 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-15 18:15:25,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-15 18:15:25,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-15 18:15:25,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:42212 deadline: 1689446125454, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-15 18:15:25,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:25,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
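Note on the entries above: read-only lookups that name a nonexistent group or server (GetRSGroupInfo for group=bogus, GetRSGroupInfoOfServer for bogus:123) complete without error, while mutating calls (RemoveRSGroup, MoveServers, BalanceRSGroup against "bogus") are rejected server-side with ConstraintException. A minimal client-side sketch of the same probes, assuming the branch-2.4 RSGroupAdminClient API named in the stack traces above; the connection handle and the helper class are illustrative assumptions, not part of this log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class BogusGroupProbe {
      // 'conn' is assumed to be an open Connection to the cluster under test.
      static void probe(Connection conn) throws Exception {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        // Lookups of a nonexistent group return null rather than throwing,
        // matching the GetRSGroupInfo entries for group=bogus above.
        RSGroupInfo missing = admin.getRSGroupInfo("bogus");
        assert missing == null;
        // Mutating calls against a nonexistent group are rejected server-side,
        // matching the "RSGroup does not exist: bogus" ConstraintException entries above.
        try {
          admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
        } catch (ConstraintException expected) {
          // expected: RSGroup does not exist: bogus
        }
      }
    }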
2023-07-15 18:15:25,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:25,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:25,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:25,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:25,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:25,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:25,468 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:25,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:25,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:25,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:25,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:25,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446125477, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:25,480 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:25,482 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:25,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,482 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:25,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:25,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,499 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=506 (was 502) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x534cd145-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=768 (was 768), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 408), ProcessCount=172 (was 172), AvailableMemoryMB=2900 (was 2900) 2023-07-15 18:15:25,500 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-15 18:15:25,516 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=506, OpenFileDescriptor=768, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=172, AvailableMemoryMB=2898 2023-07-15 18:15:25,516 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-15 18:15:25,516 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-15 18:15:25,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:25,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
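Note on the ResourceChecker entries above: before and after each test the harness records thread count, open file descriptors, load average, process count and free memory, dumps "Potentially hanging thread" stacks, and warns when the thread count crosses a limit (here 506 against a limit of 500). A minimal sketch of a before/after thread-count check in that spirit, assuming plain JMX; the class name, limit constant and output format are illustrative and not the HBase ResourceChecker implementation:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class SimpleThreadChecker {
      private static final int THREAD_WARN_LIMIT = 500; // mirrors the "superior to 500" warning above
      private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();
      private int before;

      public void beforeTest(String testName) {
        before = threads.getThreadCount();
        System.out.printf("before: %s Thread=%d%n", testName, before);
      }

      public void afterTest(String testName) {
        int after = threads.getThreadCount();
        System.out.printf("after: %s Thread=%d (was %d)%n", testName, after, before);
        if (after > THREAD_WARN_LIMIT) {
          System.out.printf("WARN Thread=%d is superior to %d%n", after, THREAD_WARN_LIMIT);
        }
      }
    }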
2023-07-15 18:15:25,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:25,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:25,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:25,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:25,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:25,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:25,530 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:25,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:25,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:25,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:25,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:25,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:25,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446125540, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:25,541 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:25,542 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:25,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,543 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:25,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:25,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:25,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 
18:15:25,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:25,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:25,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:25,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-15 18:15:25,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to default 2023-07-15 18:15:25,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:25,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:25,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:25,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,571 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:25,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:25,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:25,576 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:25,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-15 18:15:25,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-15 18:15:25,579 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:25,580 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 18:15:25,580 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:25,581 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:25,583 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f empty. 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 empty. 2023-07-15 18:15:25,588 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf empty. 2023-07-15 18:15:25,589 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:25,589 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 empty. 2023-07-15 18:15:25,589 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 empty. 2023-07-15 18:15:25,589 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:25,589 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,590 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,590 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:25,590 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-15 18:15:25,607 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:25,609 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => c06d6a463f5602903e3e4cbe00095aaf, NAME => 'Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', 
BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:25,609 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2ba4aec25f015d7ef669fc4c98078665, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:25,609 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => d00e741de667397f4b88146f258a8a3f, NAME => 'Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing c06d6a463f5602903e3e4cbe00095aaf, disabling compactions & flushes 2023-07-15 18:15:25,638 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. after waiting 0 ms 2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:25,638 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 
2023-07-15 18:15:25,638 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for c06d6a463f5602903e3e4cbe00095aaf: 2023-07-15 18:15:25,639 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7c85422ad01da13ff11a6d1a06c33712, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing d00e741de667397f4b88146f258a8a3f, disabling compactions & flushes 2023-07-15 18:15:25,640 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. after waiting 0 ms 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:25,640 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for d00e741de667397f4b88146f258a8a3f: 2023-07-15 18:15:25,640 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 2ba4aec25f015d7ef669fc4c98078665, disabling compactions & flushes 2023-07-15 18:15:25,641 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 
2023-07-15 18:15:25,641 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 863a0bfde3eb4a498e66e73b9ba967a9, NAME => 'Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp 2023-07-15 18:15:25,641 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:25,641 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. after waiting 0 ms 2023-07-15 18:15:25,641 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:25,641 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:25,641 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 2ba4aec25f015d7ef669fc4c98078665: 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 7c85422ad01da13ff11a6d1a06c33712, disabling compactions & flushes 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 863a0bfde3eb4a498e66e73b9ba967a9, disabling compactions & flushes 2023-07-15 18:15:25,653 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:25,653 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 
2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. after waiting 0 ms 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. after waiting 0 ms 2023-07-15 18:15:25,653 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:25,654 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:25,654 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:25,654 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 
2023-07-15 18:15:25,654 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 7c85422ad01da13ff11a6d1a06c33712: 2023-07-15 18:15:25,654 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 863a0bfde3eb4a498e66e73b9ba967a9: 2023-07-15 18:15:25,656 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:25,657 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444925657"}]},"ts":"1689444925657"} 2023-07-15 18:15:25,657 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444925657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444925657"}]},"ts":"1689444925657"} 2023-07-15 18:15:25,657 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444925657"}]},"ts":"1689444925657"} 2023-07-15 18:15:25,657 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444925657"}]},"ts":"1689444925657"} 2023-07-15 18:15:25,657 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444925657"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444925657"}]},"ts":"1689444925657"} 2023-07-15 18:15:25,659 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
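The entries above record CreateTableProcedure pid=129 laying out 'Group_testDisabledTableMove' on the filesystem and adding its five regions to hbase:meta. A minimal client-side sketch of the equivalent createTable request follows, assuming the standard HBase 2.x Admin API; the split keys are taken from the STARTKEY/ENDKEY boundaries logged by HRegion(7675), and the class and connection boilerplate are illustrative rather than the test's actual code.

// Sketch only: one family 'f', REGION_REPLICATION => '1', five regions, matching the descriptor in the log above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTestTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Split keys from the logged region boundaries: aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(td, splits);
    }
  }
}

Creating the table this way produces the same five-region layout that the assignment procedures distribute across the region servers in the entries that follow.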
2023-07-15 18:15:25,660 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:25,660 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444925660"}]},"ts":"1689444925660"} 2023-07-15 18:15:25,661 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-15 18:15:25,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:25,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:25,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:25,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:25,665 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, ASSIGN}] 2023-07-15 18:15:25,668 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, ASSIGN 2023-07-15 18:15:25,668 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, ASSIGN 2023-07-15 18:15:25,668 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, ASSIGN 2023-07-15 18:15:25,668 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, ASSIGN 2023-07-15 18:15:25,668 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:25,669 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, ASSIGN 2023-07-15 18:15:25,669 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:25,669 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:25,669 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40191,1689444902237; forceNewPlan=false, retain=false 2023-07-15 18:15:25,670 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44901,1689444902054; forceNewPlan=false, retain=false 2023-07-15 18:15:25,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-15 18:15:25,819 INFO [jenkins-hbase4:41169] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-15 18:15:25,824 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=863a0bfde3eb4a498e66e73b9ba967a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:25,824 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=c06d6a463f5602903e3e4cbe00095aaf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:25,824 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=2ba4aec25f015d7ef669fc4c98078665, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:25,825 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444925824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444925824"}]},"ts":"1689444925824"} 2023-07-15 18:15:25,825 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444925824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444925824"}]},"ts":"1689444925824"} 2023-07-15 18:15:25,824 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7c85422ad01da13ff11a6d1a06c33712, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:25,824 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=d00e741de667397f4b88146f258a8a3f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:25,825 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444925824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444925824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444925824"}]},"ts":"1689444925824"} 2023-07-15 18:15:25,825 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444925824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444925824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444925824"}]},"ts":"1689444925824"} 2023-07-15 18:15:25,824 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444925824"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444925824"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444925824"}]},"ts":"1689444925824"} 2023-07-15 18:15:25,827 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=131, state=RUNNABLE; OpenRegionProcedure c06d6a463f5602903e3e4cbe00095aaf, 
server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:25,828 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=132, state=RUNNABLE; OpenRegionProcedure 2ba4aec25f015d7ef669fc4c98078665, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:25,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=133, state=RUNNABLE; OpenRegionProcedure 7c85422ad01da13ff11a6d1a06c33712, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:25,832 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure 863a0bfde3eb4a498e66e73b9ba967a9, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:25,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=130, state=RUNNABLE; OpenRegionProcedure d00e741de667397f4b88146f258a8a3f, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:25,869 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-15 18:15:25,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-15 18:15:25,985 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:25,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2ba4aec25f015d7ef669fc4c98078665, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-15 18:15:25,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,986 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,987 INFO [StoreOpener-2ba4aec25f015d7ef669fc4c98078665-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,989 DEBUG [StoreOpener-2ba4aec25f015d7ef669fc4c98078665-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/f 2023-07-15 18:15:25,989 DEBUG 
[StoreOpener-2ba4aec25f015d7ef669fc4c98078665-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/f 2023-07-15 18:15:25,989 INFO [StoreOpener-2ba4aec25f015d7ef669fc4c98078665-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2ba4aec25f015d7ef669fc4c98078665 columnFamilyName f 2023-07-15 18:15:25,990 INFO [StoreOpener-2ba4aec25f015d7ef669fc4c98078665-1] regionserver.HStore(310): Store=2ba4aec25f015d7ef669fc4c98078665/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:25,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 
2023-07-15 18:15:25,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c85422ad01da13ff11a6d1a06c33712, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-15 18:15:25,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:25,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,995 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:25,996 INFO [StoreOpener-7c85422ad01da13ff11a6d1a06c33712-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:25,997 DEBUG [StoreOpener-7c85422ad01da13ff11a6d1a06c33712-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/f 2023-07-15 18:15:25,997 DEBUG [StoreOpener-7c85422ad01da13ff11a6d1a06c33712-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/f 2023-07-15 18:15:25,998 INFO [StoreOpener-7c85422ad01da13ff11a6d1a06c33712-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c85422ad01da13ff11a6d1a06c33712 columnFamilyName f 2023-07-15 18:15:25,998 INFO [StoreOpener-7c85422ad01da13ff11a6d1a06c33712-1] regionserver.HStore(310): Store=7c85422ad01da13ff11a6d1a06c33712/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:25,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:25,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2ba4aec25f015d7ef669fc4c98078665; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11918398560, jitterRate=0.10998736321926117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:26,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2ba4aec25f015d7ef669fc4c98078665: 2023-07-15 18:15:26,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665., pid=136, masterSystemTime=1689444925981 2023-07-15 18:15:26,002 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:26,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:26,002 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 
2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c06d6a463f5602903e3e4cbe00095aaf, NAME => 'Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,003 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=2ba4aec25f015d7ef669fc4c98078665, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,003 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926003"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444926003"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444926003"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444926003"}]},"ts":"1689444926003"} 2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,005 INFO [StoreOpener-c06d6a463f5602903e3e4cbe00095aaf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:26,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7c85422ad01da13ff11a6d1a06c33712; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9471308480, jitterRate=-0.11791566014289856}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:26,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7c85422ad01da13ff11a6d1a06c33712: 2023-07-15 18:15:26,006 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=132 2023-07-15 18:15:26,006 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712., pid=137, masterSystemTime=1689444925990 2023-07-15 18:15:26,007 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; OpenRegionProcedure 2ba4aec25f015d7ef669fc4c98078665, server=jenkins-hbase4.apache.org,40191,1689444902237 in 177 msec 2023-07-15 18:15:26,007 DEBUG [StoreOpener-c06d6a463f5602903e3e4cbe00095aaf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/f 2023-07-15 18:15:26,007 DEBUG [StoreOpener-c06d6a463f5602903e3e4cbe00095aaf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/f 2023-07-15 18:15:26,008 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, ASSIGN in 342 msec 2023-07-15 18:15:26,008 INFO [StoreOpener-c06d6a463f5602903e3e4cbe00095aaf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c06d6a463f5602903e3e4cbe00095aaf columnFamilyName f 2023-07-15 18:15:26,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 
2023-07-15 18:15:26,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d00e741de667397f4b88146f258a8a3f, NAME => 'Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-15 18:15:26,009 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7c85422ad01da13ff11a6d1a06c33712, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:26,009 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926009"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444926009"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444926009"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444926009"}]},"ts":"1689444926009"} 2023-07-15 18:15:26,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:26,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,010 INFO [StoreOpener-c06d6a463f5602903e3e4cbe00095aaf-1] regionserver.HStore(310): Store=c06d6a463f5602903e3e4cbe00095aaf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:26,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=133 2023-07-15 18:15:26,012 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=133, state=SUCCESS; OpenRegionProcedure 7c85422ad01da13ff11a6d1a06c33712, server=jenkins-hbase4.apache.org,44901,1689444902054 in 179 msec 2023-07-15 18:15:26,013 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, ASSIGN in 347 msec 2023-07-15 18:15:26,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,015 INFO [StoreOpener-d00e741de667397f4b88146f258a8a3f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:26,016 DEBUG [StoreOpener-d00e741de667397f4b88146f258a8a3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/f 2023-07-15 18:15:26,016 DEBUG [StoreOpener-d00e741de667397f4b88146f258a8a3f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/f 2023-07-15 18:15:26,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c06d6a463f5602903e3e4cbe00095aaf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11893579360, jitterRate=0.10767589509487152}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:26,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c06d6a463f5602903e3e4cbe00095aaf: 2023-07-15 18:15:26,016 INFO [StoreOpener-d00e741de667397f4b88146f258a8a3f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d00e741de667397f4b88146f258a8a3f columnFamilyName f 2023-07-15 18:15:26,017 INFO [StoreOpener-d00e741de667397f4b88146f258a8a3f-1] regionserver.HStore(310): Store=d00e741de667397f4b88146f258a8a3f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:26,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf., pid=135, masterSystemTime=1689444925981 
2023-07-15 18:15:26,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:26,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:26,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:26,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 863a0bfde3eb4a498e66e73b9ba967a9, NAME => 'Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-15 18:15:26,021 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=c06d6a463f5602903e3e4cbe00095aaf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,021 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926020"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444926020"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444926020"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444926020"}]},"ts":"1689444926020"} 2023-07-15 18:15:26,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:26,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,023 INFO [StoreOpener-863a0bfde3eb4a498e66e73b9ba967a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,024 DEBUG [StoreOpener-863a0bfde3eb4a498e66e73b9ba967a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/f 2023-07-15 18:15:26,025 DEBUG [StoreOpener-863a0bfde3eb4a498e66e73b9ba967a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/f 2023-07-15 18:15:26,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=131 2023-07-15 18:15:26,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=131, state=SUCCESS; OpenRegionProcedure c06d6a463f5602903e3e4cbe00095aaf, server=jenkins-hbase4.apache.org,40191,1689444902237 in 195 msec 2023-07-15 18:15:26,025 INFO [StoreOpener-863a0bfde3eb4a498e66e73b9ba967a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 863a0bfde3eb4a498e66e73b9ba967a9 columnFamilyName f 2023-07-15 18:15:26,026 INFO [StoreOpener-863a0bfde3eb4a498e66e73b9ba967a9-1] regionserver.HStore(310): Store=863a0bfde3eb4a498e66e73b9ba967a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:26,026 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, ASSIGN in 360 msec 2023-07-15 18:15:26,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:26,029 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d00e741de667397f4b88146f258a8a3f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10464464960, jitterRate=-0.02542075514793396}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:26,029 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d00e741de667397f4b88146f258a8a3f: 2023-07-15 18:15:26,030 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f., pid=139, masterSystemTime=1689444925990 2023-07-15 18:15:26,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,032 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,032 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=d00e741de667397f4b88146f258a8a3f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:26,032 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926032"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444926032"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444926032"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444926032"}]},"ts":"1689444926032"} 2023-07-15 18:15:26,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:26,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 863a0bfde3eb4a498e66e73b9ba967a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11346291520, jitterRate=0.056705743074417114}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:26,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 863a0bfde3eb4a498e66e73b9ba967a9: 2023-07-15 18:15:26,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9., pid=138, masterSystemTime=1689444925981 2023-07-15 18:15:26,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=130 2023-07-15 18:15:26,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=130, state=SUCCESS; OpenRegionProcedure d00e741de667397f4b88146f258a8a3f, server=jenkins-hbase4.apache.org,44901,1689444902054 in 201 msec 2023-07-15 18:15:26,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 
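
The open/ASSIGN procedures above bring the five regions of the newly created Group_testDisabledTableMove online; just below, the CreateTableProcedure finishes and the test waits for full assignment. A minimal sketch, in the style of a minicluster test and with hypothetical method and variable names, of how that create-and-wait step is typically written against HBaseTestingUtility (only two of the five split keys are spelled out here):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.util.Bytes;

    static void createPreSplitTableAndWait(HBaseTestingUtility testUtil) throws Exception {
      TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
      // Two of the five split points seen above; the real table also splits on the
      // binary keys i\xBF\x14i\xBE and r\x1C\xC7r\x1B.
      byte[][] splitKeys = { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") };
      // Single column family "f", matching the store opened in the log above.
      testUtil.createTable(tableName, Bytes.toBytes("f"), splitKeys);
      // Blocks until every region of the table is reported open in hbase:meta and by
      // the AssignmentManager -- the "Waiting until all regions ... get assigned"
      // lines that follow in the log.
      testUtil.waitUntilAllRegionsAssigned(tableName);
    }
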
2023-07-15 18:15:26,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:26,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, ASSIGN in 370 msec 2023-07-15 18:15:26,037 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=863a0bfde3eb4a498e66e73b9ba967a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,037 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926037"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444926037"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444926037"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444926037"}]},"ts":"1689444926037"} 2023-07-15 18:15:26,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-15 18:15:26,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure 863a0bfde3eb4a498e66e73b9ba967a9, server=jenkins-hbase4.apache.org,40191,1689444902237 in 206 msec 2023-07-15 18:15:26,041 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=129 2023-07-15 18:15:26,041 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, ASSIGN in 375 msec 2023-07-15 18:15:26,041 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:26,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444926042"}]},"ts":"1689444926042"} 2023-07-15 18:15:26,043 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-15 18:15:26,045 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:26,046 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 472 msec 2023-07-15 18:15:26,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-15 18:15:26,184 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 129 completed 2023-07-15 18:15:26,184 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-15 18:15:26,185 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:26,189 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-15 18:15:26,189 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:26,189 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-15 18:15:26,190 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:26,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-15 18:15:26,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:26,198 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-15 18:15:26,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-15 18:15:26,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-15 18:15:26,206 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444926206"}]},"ts":"1689444926206"} 2023-07-15 18:15:26,207 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-15 18:15:26,209 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-15 18:15:26,209 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, UNASSIGN}] 2023-07-15 18:15:26,211 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, UNASSIGN 2023-07-15 18:15:26,212 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, UNASSIGN 2023-07-15 18:15:26,212 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, UNASSIGN 2023-07-15 18:15:26,212 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, UNASSIGN 2023-07-15 18:15:26,212 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, UNASSIGN 2023-07-15 18:15:26,213 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=863a0bfde3eb4a498e66e73b9ba967a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,213 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7c85422ad01da13ff11a6d1a06c33712, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:26,213 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444926213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444926213"}]},"ts":"1689444926213"} 2023-07-15 18:15:26,213 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444926213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444926213"}]},"ts":"1689444926213"} 2023-07-15 18:15:26,213 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=2ba4aec25f015d7ef669fc4c98078665, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,213 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=d00e741de667397f4b88146f258a8a3f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:26,213 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444926213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444926213"}]},"ts":"1689444926213"} 2023-07-15 18:15:26,213 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444926213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444926213"}]},"ts":"1689444926213"} 2023-07-15 18:15:26,213 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=c06d6a463f5602903e3e4cbe00095aaf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,213 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444926213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444926213"}]},"ts":"1689444926213"} 2023-07-15 18:15:26,214 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=145, state=RUNNABLE; CloseRegionProcedure 863a0bfde3eb4a498e66e73b9ba967a9, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:26,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=144, state=RUNNABLE; CloseRegionProcedure 7c85422ad01da13ff11a6d1a06c33712, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:26,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=143, state=RUNNABLE; CloseRegionProcedure 2ba4aec25f015d7ef669fc4c98078665, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:26,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=141, state=RUNNABLE; CloseRegionProcedure d00e741de667397f4b88146f258a8a3f, server=jenkins-hbase4.apache.org,44901,1689444902054}] 2023-07-15 18:15:26,217 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=142, state=RUNNABLE; CloseRegionProcedure c06d6a463f5602903e3e4cbe00095aaf, server=jenkins-hbase4.apache.org,40191,1689444902237}] 2023-07-15 18:15:26,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-15 18:15:26,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c06d6a463f5602903e3e4cbe00095aaf, disabling compactions & flushes 2023-07-15 18:15:26,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 
2023-07-15 18:15:26,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:26,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. after waiting 0 ms 2023-07-15 18:15:26,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 2023-07-15 18:15:26,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7c85422ad01da13ff11a6d1a06c33712, disabling compactions & flushes 2023-07-15 18:15:26,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. after waiting 0 ms 2023-07-15 18:15:26,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:26,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:26,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712. 2023-07-15 18:15:26,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7c85422ad01da13ff11a6d1a06c33712: 2023-07-15 18:15:26,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf. 
2023-07-15 18:15:26,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c06d6a463f5602903e3e4cbe00095aaf: 2023-07-15 18:15:26,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d00e741de667397f4b88146f258a8a3f, disabling compactions & flushes 2023-07-15 18:15:26,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. after waiting 0 ms 2023-07-15 18:15:26,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,377 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7c85422ad01da13ff11a6d1a06c33712, regionState=CLOSED 2023-07-15 18:15:26,377 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926377"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444926377"}]},"ts":"1689444926377"} 2023-07-15 18:15:26,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:26,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2ba4aec25f015d7ef669fc4c98078665, disabling compactions & flushes 2023-07-15 18:15:26,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:26,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 2023-07-15 18:15:26,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. after waiting 0 ms 2023-07-15 18:15:26,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 
2023-07-15 18:15:26,379 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=c06d6a463f5602903e3e4cbe00095aaf, regionState=CLOSED 2023-07-15 18:15:26,379 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926379"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444926379"}]},"ts":"1689444926379"} 2023-07-15 18:15:26,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:26,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f. 2023-07-15 18:15:26,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d00e741de667397f4b88146f258a8a3f: 2023-07-15 18:15:26,382 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=144 2023-07-15 18:15:26,382 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=144, state=SUCCESS; CloseRegionProcedure 7c85422ad01da13ff11a6d1a06c33712, server=jenkins-hbase4.apache.org,44901,1689444902054 in 165 msec 2023-07-15 18:15:26,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:26,383 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=142 2023-07-15 18:15:26,383 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=142, state=SUCCESS; CloseRegionProcedure c06d6a463f5602903e3e4cbe00095aaf, server=jenkins-hbase4.apache.org,40191,1689444902237 in 163 msec 2023-07-15 18:15:26,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7c85422ad01da13ff11a6d1a06c33712, UNASSIGN in 173 msec 2023-07-15 18:15:26,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665. 
2023-07-15 18:15:26,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2ba4aec25f015d7ef669fc4c98078665: 2023-07-15 18:15:26,384 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=d00e741de667397f4b88146f258a8a3f, regionState=CLOSED 2023-07-15 18:15:26,384 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444926384"}]},"ts":"1689444926384"} 2023-07-15 18:15:26,384 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c06d6a463f5602903e3e4cbe00095aaf, UNASSIGN in 174 msec 2023-07-15 18:15:26,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:26,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 863a0bfde3eb4a498e66e73b9ba967a9, disabling compactions & flushes 2023-07-15 18:15:26,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:26,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 2023-07-15 18:15:26,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. after waiting 0 ms 2023-07-15 18:15:26,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 
2023-07-15 18:15:26,386 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=2ba4aec25f015d7ef669fc4c98078665, regionState=CLOSED 2023-07-15 18:15:26,387 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689444926386"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444926386"}]},"ts":"1689444926386"} 2023-07-15 18:15:26,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=141 2023-07-15 18:15:26,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=141, state=SUCCESS; CloseRegionProcedure d00e741de667397f4b88146f258a8a3f, server=jenkins-hbase4.apache.org,44901,1689444902054 in 169 msec 2023-07-15 18:15:26,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-15 18:15:26,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; CloseRegionProcedure 2ba4aec25f015d7ef669fc4c98078665, server=jenkins-hbase4.apache.org,40191,1689444902237 in 173 msec 2023-07-15 18:15:26,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d00e741de667397f4b88146f258a8a3f, UNASSIGN in 180 msec 2023-07-15 18:15:26,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2ba4aec25f015d7ef669fc4c98078665, UNASSIGN in 181 msec 2023-07-15 18:15:26,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:26,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9. 
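
The close/UNASSIGN procedures above are the children of DisableTableProcedure pid=140, stored for the disable issued a few lines earlier; once the last region closes, the table is marked DISABLED just below and the test then moves the disabled table to Group_testDisabledTableMove_1285837543. Roughly how that client-side sequence might look, assuming the hbase-rsgroup RSGroupAdminClient and illustrative names (the target group is assumed to exist already):

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void disableThenMove(Connection conn, String targetGroup) throws Exception {
      TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
      try (Admin admin = conn.getAdmin()) {
        // Kicks off the DisableTableProcedure (pid=140 above): every region is
        // unassigned, then the table state flips to DISABLED in hbase:meta.
        admin.disableTable(tableName);
      }
      // Moving a disabled table only rewrites rsgroup metadata; the server logs
      // "Skipping move regions because the table ... is disabled" further below.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.moveTables(Collections.singleton(tableName), targetGroup);
    }
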
2023-07-15 18:15:26,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 863a0bfde3eb4a498e66e73b9ba967a9: 2023-07-15 18:15:26,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,393 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=863a0bfde3eb4a498e66e73b9ba967a9, regionState=CLOSED 2023-07-15 18:15:26,394 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689444926393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444926393"}]},"ts":"1689444926393"} 2023-07-15 18:15:26,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=145 2023-07-15 18:15:26,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=145, state=SUCCESS; CloseRegionProcedure 863a0bfde3eb4a498e66e73b9ba967a9, server=jenkins-hbase4.apache.org,40191,1689444902237 in 183 msec 2023-07-15 18:15:26,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-15 18:15:26,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=863a0bfde3eb4a498e66e73b9ba967a9, UNASSIGN in 191 msec 2023-07-15 18:15:26,403 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444926403"}]},"ts":"1689444926403"} 2023-07-15 18:15:26,404 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-15 18:15:26,407 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-15 18:15:26,414 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 213 msec 2023-07-15 18:15:26,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-15 18:15:26,506 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-15 18:15:26,507 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,514 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:26,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-15 18:15:26,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1285837543, current retry=0 2023-07-15 18:15:26,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1285837543. 2023-07-15 18:15:26,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:26,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-15 18:15:26,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:26,524 INFO [Listener at localhost/40085] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-15 18:15:26,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-15 18:15:26,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:26,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:42212 deadline: 1689444986524, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-15 18:15:26,526 DEBUG [Listener at localhost/40085] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-15 18:15:26,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-15 18:15:26,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,530 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1285837543' 2023-07-15 18:15:26,531 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:26,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-15 18:15:26,542 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,542 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,542 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,542 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:26,542 DEBUG [HFileArchiver-4] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,549 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/recovered.edits] 2023-07-15 18:15:26,549 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/recovered.edits] 2023-07-15 18:15:26,549 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/recovered.edits] 2023-07-15 18:15:26,550 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/recovered.edits] 2023-07-15 18:15:26,550 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/f, FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/recovered.edits] 2023-07-15 18:15:26,558 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712/recovered.edits/4.seqid 2023-07-15 18:15:26,559 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/7c85422ad01da13ff11a6d1a06c33712 2023-07-15 18:15:26,560 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/recovered.edits/4.seqid to 
hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f/recovered.edits/4.seqid 2023-07-15 18:15:26,561 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9/recovered.edits/4.seqid 2023-07-15 18:15:26,561 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf/recovered.edits/4.seqid 2023-07-15 18:15:26,561 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/d00e741de667397f4b88146f258a8a3f 2023-07-15 18:15:26,562 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/863a0bfde3eb4a498e66e73b9ba967a9 2023-07-15 18:15:26,562 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/c06d6a463f5602903e3e4cbe00095aaf 2023-07-15 18:15:26,563 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/recovered.edits/4.seqid to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/archive/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665/recovered.edits/4.seqid 2023-07-15 18:15:26,564 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/.tmp/data/default/Group_testDisabledTableMove/2ba4aec25f015d7ef669fc4c98078665 2023-07-15 18:15:26,565 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-15 18:15:26,568 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,570 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-15 18:15:26,576 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
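
DeleteTableProcedure pid=152 above archives each region directory into the cluster's archive/ tree via HFileArchiver, then removes the region rows from hbase:meta and drops the table descriptor. From the client this is a single delete of the already-disabled table; a small sketch with hypothetical names:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    static void deleteDisabledTable(Admin admin) throws Exception {
      TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
      // The table was already disabled above, so go straight to delete; this runs the
      // DeleteTableProcedure that archives the region dirs and cleans hbase:meta.
      if (admin.isTableDisabled(tableName)) {
        admin.deleteTable(tableName);
      }
      if (admin.tableExists(tableName)) {
        throw new IllegalStateException("table should be gone after DeleteTableProcedure");
      }
    }
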
2023-07-15 18:15:26,577 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,577 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-15 18:15:26,577 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444926577"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,577 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444926577"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,578 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444926577"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,578 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444926577"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,578 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444926577"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,580 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-15 18:15:26,580 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d00e741de667397f4b88146f258a8a3f, NAME => 'Group_testDisabledTableMove,,1689444925573.d00e741de667397f4b88146f258a8a3f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => c06d6a463f5602903e3e4cbe00095aaf, NAME => 'Group_testDisabledTableMove,aaaaa,1689444925573.c06d6a463f5602903e3e4cbe00095aaf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 2ba4aec25f015d7ef669fc4c98078665, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689444925573.2ba4aec25f015d7ef669fc4c98078665.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 7c85422ad01da13ff11a6d1a06c33712, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689444925573.7c85422ad01da13ff11a6d1a06c33712.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 863a0bfde3eb4a498e66e73b9ba967a9, NAME => 'Group_testDisabledTableMove,zzzzz,1689444925573.863a0bfde3eb4a498e66e73b9ba967a9.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-15 18:15:26,580 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-15 18:15:26,580 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444926580"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:26,582 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-15 18:15:26,584 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-15 18:15:26,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 57 msec 2023-07-15 18:15:26,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-15 18:15:26,642 INFO [Listener at localhost/40085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-15 18:15:26,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:26,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:26,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:26,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889] to rsgroup default 2023-07-15 18:15:26,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:26,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1285837543, current retry=0 2023-07-15 18:15:26,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37155,1689444906062, jenkins-hbase4.apache.org,39889,1689444902165] are moved back to Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1285837543 => default 2023-07-15 18:15:26,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:26,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1285837543 2023-07-15 18:15:26,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:26,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:26,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:26,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
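
The calls above are the usual rsgroup test cleanup: the two servers that had been moved into Group_testDisabledTableMove_1285837543 go back to the default group and the now-empty group is removed, with each change persisted to the /hbase/rsgroup znodes. A hedged sketch of the same cleanup via RSGroupAdminClient (server addresses copied from the log; the method name is illustrative):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void restoreDefaultGroup(Connection conn, String group) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The two servers named in the log above.
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 37155),
          Address.fromParts("jenkins-hbase4.apache.org", 39889)));
      // Move them back to the built-in "default" group, then drop the empty group.
      rsGroupAdmin.moveServers(servers, "default");
      rsGroupAdmin.removeRSGroup(group);
    }
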
2023-07-15 18:15:26,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:26,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:26,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:26,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:26,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:26,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:26,672 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:26,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:26,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:26,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:26,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:26,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:26,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446126684, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:26,685 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:26,686 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:26,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,687 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:26,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:26,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:26,709 INFO [Listener at localhost/40085] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510 (was 506) Potentially hanging thread: hconnection-0x3c71af44-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3d1b204c-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1367817635_17 at /127.0.0.1:42490 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=794 (was 768) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=408 (was 408), ProcessCount=172 (was 172), AvailableMemoryMB=2884 (was 2898) 2023-07-15 18:15:26,709 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-15 18:15:26,727 INFO [Listener at localhost/40085] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=510, OpenFileDescriptor=794, MaxFileDescriptor=60000, SystemLoadAverage=408, ProcessCount=172, AvailableMemoryMB=2884 2023-07-15 18:15:26,727 WARN [Listener at localhost/40085] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-15 18:15:26,728 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-15 18:15:26,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:26,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
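The ConstraintException above ("Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist") comes from asking the rsgroup service to move the master's address into the "master" group; the master is not registered as a region server, so the teardown only logs it as "Got this on setup, FYI" and continues. A hedged sketch of that tolerant call (signatures assumed, the address copied from the log):

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MoveMasterToGroupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(MoveMasterToGroupSketch.class);

  static void tryParkMasterInOwnGroup(Connection conn) throws Exception {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn); // assumed constructor
    try {
      groupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41169)),
          "master");
    } catch (ConstraintException e) {
      // Expected when the address belongs to the master rather than a live
      // region server; mirror the test's "Got this on setup, FYI" warning.
      LOG.warn("Got this on setup, FYI", e);
    }
  }
}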
2023-07-15 18:15:26,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:26,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:26,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:26,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:26,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:26,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:26,741 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:26,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:26,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:26,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:26,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:26,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:26,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41169] to rsgroup master 2023-07-15 18:15:26,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:26,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42212 deadline: 1689446126754, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 2023-07-15 18:15:26,754 WARN [Listener at localhost/40085] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41169 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:26,756 INFO [Listener at localhost/40085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:26,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:26,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:26,757 INFO [Listener at localhost/40085] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37155, jenkins-hbase4.apache.org:39889, jenkins-hbase4.apache.org:40191, jenkins-hbase4.apache.org:44901], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:26,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:26,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41169] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:26,758 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 18:15:26,758 INFO [Listener at localhost/40085] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 18:15:26,758 DEBUG [Listener at localhost/40085] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x16d27b18 to 127.0.0.1:54099 2023-07-15 18:15:26,758 DEBUG [Listener at localhost/40085] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,760 DEBUG [Listener at localhost/40085] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 18:15:26,760 DEBUG [Listener at localhost/40085] util.JVMClusterUtil(257): Found active master hash=206239472, stopped=false 2023-07-15 18:15:26,760 DEBUG [Listener at localhost/40085] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 18:15:26,760 DEBUG [Listener at localhost/40085] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 18:15:26,761 INFO [Listener at localhost/40085] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:26,762 INFO [Listener at localhost/40085] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:26,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:26,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:26,763 DEBUG [Listener at localhost/40085] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4f904202 to 127.0.0.1:54099 2023-07-15 18:15:26,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:26,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:26,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:26,763 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:26,763 DEBUG [Listener at localhost/40085] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,764 INFO [Listener at localhost/40085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44901,1689444902054' ***** 2023-07-15 18:15:26,764 INFO [Listener at localhost/40085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
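From here the minicluster shutdown proceeds: /hbase/running is deleted in ZooKeeper and every region server is asked to stop. In the test harness this is typically driven by a single call, roughly as in this minimal sketch (assuming the usual JUnit wiring, not the test's exact code):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterShutdownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the master and all region servers, then the backing DFS and
    // ZooKeeper; the deletion of /hbase/running seen above is part of this.
    TEST_UTIL.shutdownMiniCluster();
  }
}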
2023-07-15 18:15:26,764 INFO [Listener at localhost/40085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39889,1689444902165' ***** 2023-07-15 18:15:26,765 INFO [Listener at localhost/40085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:26,764 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:26,765 INFO [Listener at localhost/40085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40191,1689444902237' ***** 2023-07-15 18:15:26,765 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:26,765 INFO [Listener at localhost/40085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:26,766 INFO [Listener at localhost/40085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37155,1689444906062' ***** 2023-07-15 18:15:26,766 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:26,769 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:26,769 INFO [Listener at localhost/40085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:26,786 INFO [RS:2;jenkins-hbase4:40191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@57d756aa{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:26,786 INFO [RS:3;jenkins-hbase4:37155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@42399250{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:26,786 INFO [RS:1;jenkins-hbase4:39889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@35ca6370{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:26,786 INFO [RS:0;jenkins-hbase4:44901] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5fac79e5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:26,791 INFO [RS:1;jenkins-hbase4:39889] server.AbstractConnector(383): Stopped ServerConnector@402f020d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:26,791 INFO [RS:2;jenkins-hbase4:40191] server.AbstractConnector(383): Stopped ServerConnector@11b78595{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:26,791 INFO [RS:0;jenkins-hbase4:44901] server.AbstractConnector(383): Stopped ServerConnector@4f218500{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:26,792 INFO [RS:3;jenkins-hbase4:37155] server.AbstractConnector(383): Stopped ServerConnector@57fbd536{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:26,792 INFO [RS:2;jenkins-hbase4:40191] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:26,792 INFO [RS:1;jenkins-hbase4:39889] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:26,792 INFO [RS:3;jenkins-hbase4:37155] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:26,792 INFO [RS:0;jenkins-hbase4:44901] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 
18:15:26,793 INFO [RS:2;jenkins-hbase4:40191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ad0f308{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:26,793 INFO [RS:1;jenkins-hbase4:39889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d567319{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:26,795 INFO [RS:3;jenkins-hbase4:37155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31b086ec{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:26,799 INFO [RS:0;jenkins-hbase4:44901] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78c50a{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:26,799 INFO [RS:1;jenkins-hbase4:39889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e324a0c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:26,799 INFO [RS:2;jenkins-hbase4:40191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@265ffb95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:26,799 INFO [RS:3;jenkins-hbase4:37155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5cf91eb3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:26,800 INFO [RS:0;jenkins-hbase4:44901] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4e7f1c55{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:26,803 INFO [RS:3;jenkins-hbase4:37155] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:26,803 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:26,803 INFO [RS:3;jenkins-hbase4:37155] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:26,803 INFO [RS:3;jenkins-hbase4:37155] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:26,803 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:26,803 DEBUG [RS:3;jenkins-hbase4:37155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7095516b to 127.0.0.1:54099 2023-07-15 18:15:26,804 DEBUG [RS:3;jenkins-hbase4:37155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,804 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37155,1689444906062; all regions closed. 
2023-07-15 18:15:26,804 INFO [RS:2;jenkins-hbase4:40191] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:26,804 INFO [RS:1;jenkins-hbase4:39889] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:26,804 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:26,804 INFO [RS:2;jenkins-hbase4:40191] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:26,804 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:26,804 INFO [RS:2;jenkins-hbase4:40191] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:26,804 INFO [RS:1;jenkins-hbase4:39889] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:26,805 INFO [RS:1;jenkins-hbase4:39889] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:26,805 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(3305): Received CLOSE for 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:26,805 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:26,805 DEBUG [RS:1;jenkins-hbase4:39889] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7779fbd0 to 127.0.0.1:54099 2023-07-15 18:15:26,805 DEBUG [RS:1;jenkins-hbase4:39889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,805 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39889,1689444902165; all regions closed. 2023-07-15 18:15:26,805 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:26,806 DEBUG [RS:2;jenkins-hbase4:40191] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x05af3583 to 127.0.0.1:54099 2023-07-15 18:15:26,806 DEBUG [RS:2;jenkins-hbase4:40191] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,806 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 18:15:26,806 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1478): Online Regions={0a660dd6e6dc1267929847565f5129c8=testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8.} 2023-07-15 18:15:26,806 INFO [RS:0;jenkins-hbase4:44901] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:26,806 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:26,806 INFO [RS:0;jenkins-hbase4:44901] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:26,806 INFO [RS:0;jenkins-hbase4:44901] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
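At this point RS:2 reports a single online region (testRename) and RS:0 is about to report meta, namespace, hbase:rsgroup and unmovedTable. A small client-side sketch of how one might list the regions hosted by each live server (standard Admin API, illustrative only):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class OnlineRegionsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      for (ServerName server : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        for (RegionInfo region : admin.getRegions(server)) {
          // Prints lines comparable to the "Online Regions={...}" entries above.
          System.out.println(server.getAddress() + " hosts " + region.getRegionNameAsString());
        }
      }
    }
  }
}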
2023-07-15 18:15:26,806 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(3305): Received CLOSE for 1c87ff5cd30bfdf1c603a34ec3bb14c0 2023-07-15 18:15:26,807 DEBUG [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1504): Waiting on 0a660dd6e6dc1267929847565f5129c8 2023-07-15 18:15:26,807 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(3305): Received CLOSE for 82724fed0e99f8e969020c075e232437 2023-07-15 18:15:26,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1c87ff5cd30bfdf1c603a34ec3bb14c0, disabling compactions & flushes 2023-07-15 18:15:26,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0a660dd6e6dc1267929847565f5129c8, disabling compactions & flushes 2023-07-15 18:15:26,808 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(3305): Received CLOSE for a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:26,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:26,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:26,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. after waiting 1 ms 2023-07-15 18:15:26,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:26,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:26,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:26,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. after waiting 0 ms 2023-07-15 18:15:26,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:26,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,808 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:26,809 DEBUG [RS:0;jenkins-hbase4:44901] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7ddd3abf to 127.0.0.1:54099 2023-07-15 18:15:26,809 DEBUG [RS:0;jenkins-hbase4:44901] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,809 INFO [RS:0;jenkins-hbase4:44901] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-15 18:15:26,809 INFO [RS:0;jenkins-hbase4:44901] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:26,810 INFO [RS:0;jenkins-hbase4:44901] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:26,810 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 18:15:26,815 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,826 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-15 18:15:26,826 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/testRename/0a660dd6e6dc1267929847565f5129c8/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 18:15:26,828 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:26,827 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1478): Online Regions={1c87ff5cd30bfdf1c603a34ec3bb14c0=hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0., 1588230740=hbase:meta,,1.1588230740, 82724fed0e99f8e969020c075e232437=hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437., a0c094ba580bcbf508d170378db1325b=unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b.} 2023-07-15 18:15:26,830 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:26,830 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:26,830 DEBUG [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1504): Waiting on 1588230740, 1c87ff5cd30bfdf1c603a34ec3bb14c0, 82724fed0e99f8e969020c075e232437, a0c094ba580bcbf508d170378db1325b 2023-07-15 18:15:26,830 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:26,832 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:26,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 2023-07-15 18:15:26,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0a660dd6e6dc1267929847565f5129c8: 2023-07-15 18:15:26,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689444919963.0a660dd6e6dc1267929847565f5129c8. 
2023-07-15 18:15:26,836 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-15 18:15:26,839 DEBUG [RS:1;jenkins-hbase4:39889] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:26,839 INFO [RS:1;jenkins-hbase4:39889] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39889%2C1689444902165:(num 1689444904420) 2023-07-15 18:15:26,839 DEBUG [RS:1;jenkins-hbase4:39889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,839 INFO [RS:1;jenkins-hbase4:39889] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,840 INFO [RS:1;jenkins-hbase4:39889] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:26,840 INFO [RS:1;jenkins-hbase4:39889] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:26,840 INFO [RS:1;jenkins-hbase4:39889] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:26,840 INFO [RS:1;jenkins-hbase4:39889] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:26,841 INFO [RS:1;jenkins-hbase4:39889] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39889 2023-07-15 18:15:26,844 DEBUG [RS:3;jenkins-hbase4:37155] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:26,844 INFO [RS:3;jenkins-hbase4:37155] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37155%2C1689444906062:(num 1689444906348) 2023-07-15 18:15:26,844 DEBUG [RS:3;jenkins-hbase4:37155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:26,844 INFO [RS:3;jenkins-hbase4:37155] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:26,851 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39889,1689444902165 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,853 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,854 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39889,1689444902165] 2023-07-15 18:15:26,854 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39889,1689444902165; numProcessing=1 2023-07-15 18:15:26,854 INFO [RS:3;jenkins-hbase4:37155] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:26,855 INFO [RS:3;jenkins-hbase4:37155] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:26,855 INFO [RS:3;jenkins-hbase4:37155] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-15 18:15:26,855 INFO [RS:3;jenkins-hbase4:37155] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:26,855 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:26,857 INFO [RS:3;jenkins-hbase4:37155] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37155 2023-07-15 18:15:26,858 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:26,858 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:26,858 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:26,859 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37155,1689444906062 2023-07-15 18:15:26,859 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:26,859 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39889,1689444902165 already deleted, retry=false 2023-07-15 18:15:26,860 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39889,1689444902165 expired; onlineServers=3 2023-07-15 18:15:26,861 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37155,1689444906062] 2023-07-15 18:15:26,861 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37155,1689444906062; numProcessing=2 2023-07-15 18:15:26,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/namespace/1c87ff5cd30bfdf1c603a34ec3bb14c0/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-15 18:15:26,864 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37155,1689444906062 already deleted, retry=false 2023-07-15 18:15:26,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 
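The NodeDeleted and NodeChildrenChanged events above are the master and the surviving region servers observing ephemeral znodes under /hbase/rs disappear as each stopping region server closes its ZooKeeper session; RegionServerTracker then treats every deletion as a server expiration. The same watch pattern can be reproduced with the plain ZooKeeper client. This is only a minimal sketch under stated assumptions: the quorum address is a placeholder (the minicluster above happened to use a random client port, 127.0.0.1:54099), and HBase itself goes through its internal ZKWatcher rather than this raw client.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Placeholder quorum address; a real test would read it from the cluster configuration.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                System.out.println("children of " + event.getPath() + " changed");
            }
        });
        connected.await();
        // getChildren with watch=true arms a one-shot watch on /hbase/rs, the znode whose
        // ephemeral children represent live region servers in the log above.
        List<String> liveServers = zk.getChildren("/hbase/rs", true);
        System.out.println("live region servers: " + liveServers);
        zk.close();
    }
}

ZooKeeper watches are one-shot, so a tracker has to re-register its children watch after every notification; that is why each departing server produces a fresh NodeChildrenChanged event in the log.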
2023-07-15 18:15:26,864 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37155,1689444906062 expired; onlineServers=2 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1c87ff5cd30bfdf1c603a34ec3bb14c0: 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689444905030.1c87ff5cd30bfdf1c603a34ec3bb14c0. 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 82724fed0e99f8e969020c075e232437, disabling compactions & flushes 2023-07-15 18:15:26,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. after waiting 0 ms 2023-07-15 18:15:26,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:26,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 82724fed0e99f8e969020c075e232437 1/1 column families, dataSize=28.46 KB heapSize=46.80 KB 2023-07-15 18:15:26,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/info/64f0f45f6eba48dd958c50af7d662da4 2023-07-15 18:15:26,917 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 64f0f45f6eba48dd958c50af7d662da4 2023-07-15 18:15:26,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.46 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/.tmp/m/fe6ffb78216944948657e4d758cd597c 2023-07-15 18:15:26,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe6ffb78216944948657e4d758cd597c 2023-07-15 18:15:26,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/.tmp/m/fe6ffb78216944948657e4d758cd597c as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/m/fe6ffb78216944948657e4d758cd597c 2023-07-15 18:15:26,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fe6ffb78216944948657e4d758cd597c 2023-07-15 
18:15:26,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/m/fe6ffb78216944948657e4d758cd597c, entries=28, sequenceid=95, filesize=6.1 K 2023-07-15 18:15:26,936 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.46 KB/29141, heapSize ~46.79 KB/47912, currentSize=0 B/0 for 82724fed0e99f8e969020c075e232437 in 72ms, sequenceid=95, compaction requested=false 2023-07-15 18:15:26,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/rep_barrier/7d2084bcdef94e819fedabf62d3f73a7 2023-07-15 18:15:26,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/rsgroup/82724fed0e99f8e969020c075e232437/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-15 18:15:26,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:26,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:26,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 82724fed0e99f8e969020c075e232437: 2023-07-15 18:15:26,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689444905044.82724fed0e99f8e969020c075e232437. 2023-07-15 18:15:26,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a0c094ba580bcbf508d170378db1325b, disabling compactions & flushes 2023-07-15 18:15:26,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:26,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:26,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. after waiting 0 ms 2023-07-15 18:15:26,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:26,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/default/unmovedTable/a0c094ba580bcbf508d170378db1325b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-15 18:15:26,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 
2023-07-15 18:15:26,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a0c094ba580bcbf508d170378db1325b: 2023-07-15 18:15:26,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689444921636.a0c094ba580bcbf508d170378db1325b. 2023-07-15 18:15:26,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7d2084bcdef94e819fedabf62d3f73a7 2023-07-15 18:15:26,961 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:26,961 INFO [RS:3;jenkins-hbase4:37155] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37155,1689444906062; zookeeper connection closed. 2023-07-15 18:15:26,961 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:37155-0x1016a31dca1000b, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:26,962 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3e2576ff] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3e2576ff 2023-07-15 18:15:26,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/table/1ae5add78ea6492aababaa1230fc01cb 2023-07-15 18:15:26,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ae5add78ea6492aababaa1230fc01cb 2023-07-15 18:15:26,971 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/info/64f0f45f6eba48dd958c50af7d662da4 as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info/64f0f45f6eba48dd958c50af7d662da4 2023-07-15 18:15:26,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 64f0f45f6eba48dd958c50af7d662da4 2023-07-15 18:15:26,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/info/64f0f45f6eba48dd958c50af7d662da4, entries=62, sequenceid=206, filesize=11.9 K 2023-07-15 18:15:26,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/rep_barrier/7d2084bcdef94e819fedabf62d3f73a7 as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier/7d2084bcdef94e819fedabf62d3f73a7 2023-07-15 18:15:26,984 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
7d2084bcdef94e819fedabf62d3f73a7 2023-07-15 18:15:26,984 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/rep_barrier/7d2084bcdef94e819fedabf62d3f73a7, entries=8, sequenceid=206, filesize=5.8 K 2023-07-15 18:15:26,985 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/.tmp/table/1ae5add78ea6492aababaa1230fc01cb as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table/1ae5add78ea6492aababaa1230fc01cb 2023-07-15 18:15:26,994 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1ae5add78ea6492aababaa1230fc01cb 2023-07-15 18:15:26,994 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/table/1ae5add78ea6492aababaa1230fc01cb, entries=16, sequenceid=206, filesize=6.0 K 2023-07-15 18:15:26,995 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 163ms, sequenceid=206, compaction requested=false 2023-07-15 18:15:26,995 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 18:15:27,007 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40191,1689444902237; all regions closed. 
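The flush entries above show the close-time flush path for hbase:rsgroup and hbase:meta: the memstore is written to a temporary store file under .tmp, the file is committed into the column family directory, and HRegion logs "Finished flush" with the flushed data size and sequence id. The same path can be exercised from a client by requesting a flush explicitly; the following is a minimal sketch, assuming an hbase-site.xml on the classpath that points at a reachable cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSketch {
    public static void main(String[] args) throws Exception {
        // Assumes cluster connection details come from hbase-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // Force a memstore flush of the rsgroup metadata table; the table name is taken
            // from the log above (hbase:rsgroup).
            admin.flush(TableName.valueOf("hbase:rsgroup"));
        }
    }
}

In the shutdown above no explicit flush is needed: the region server flushes each region's memstore as part of closing it.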
2023-07-15 18:15:27,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/data/hbase/meta/1588230740/recovered.edits/209.seqid, newMaxSeqId=209, maxSeqId=94 2023-07-15 18:15:27,012 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:27,013 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:27,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:27,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:27,014 DEBUG [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:27,014 INFO [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40191%2C1689444902237.meta:.meta(num 1689444904713) 2023-07-15 18:15:27,023 DEBUG [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:27,023 INFO [RS:2;jenkins-hbase4:40191] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40191%2C1689444902237:(num 1689444904420) 2023-07-15 18:15:27,023 DEBUG [RS:2;jenkins-hbase4:40191] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:27,023 INFO [RS:2;jenkins-hbase4:40191] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:27,024 INFO [RS:2;jenkins-hbase4:40191] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:27,024 INFO [RS:2;jenkins-hbase4:40191] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:27,024 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:27,024 INFO [RS:2;jenkins-hbase4:40191] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:27,024 INFO [RS:2;jenkins-hbase4:40191] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-15 18:15:27,025 INFO [RS:2;jenkins-hbase4:40191] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40191 2023-07-15 18:15:27,029 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:27,029 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40191,1689444902237 2023-07-15 18:15:27,029 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:27,031 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40191,1689444902237] 2023-07-15 18:15:27,031 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40191,1689444902237; numProcessing=3 2023-07-15 18:15:27,031 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44901,1689444902054; all regions closed. 2023-07-15 18:15:27,032 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40191,1689444902237 already deleted, retry=false 2023-07-15 18:15:27,032 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40191,1689444902237 expired; onlineServers=1 2023-07-15 18:15:27,041 DEBUG [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:27,041 INFO [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44901%2C1689444902054.meta:.meta(num 1689444911711) 2023-07-15 18:15:27,047 DEBUG [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/oldWALs 2023-07-15 18:15:27,047 INFO [RS:0;jenkins-hbase4:44901] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44901%2C1689444902054:(num 1689444904420) 2023-07-15 18:15:27,047 DEBUG [RS:0;jenkins-hbase4:44901] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:27,047 INFO [RS:0;jenkins-hbase4:44901] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:27,047 INFO [RS:0;jenkins-hbase4:44901] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:27,047 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 18:15:27,048 INFO [RS:0;jenkins-hbase4:44901] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44901 2023-07-15 18:15:27,051 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44901,1689444902054 2023-07-15 18:15:27,051 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:27,052 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44901,1689444902054] 2023-07-15 18:15:27,052 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44901,1689444902054; numProcessing=4 2023-07-15 18:15:27,053 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44901,1689444902054 already deleted, retry=false 2023-07-15 18:15:27,053 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44901,1689444902054 expired; onlineServers=0 2023-07-15 18:15:27,053 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41169,1689444900240' ***** 2023-07-15 18:15:27,053 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 18:15:27,053 DEBUG [M:0;jenkins-hbase4:41169] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d77fccb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:27,054 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:27,057 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:27,057 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:27,057 INFO [M:0;jenkins-hbase4:41169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@44a80d6e{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:27,057 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:27,057 INFO [M:0;jenkins-hbase4:41169] server.AbstractConnector(383): Stopped ServerConnector@23d12ea9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:27,057 INFO [M:0;jenkins-hbase4:41169] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:27,058 INFO [M:0;jenkins-hbase4:41169] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@66ba5314{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:27,059 INFO [M:0;jenkins-hbase4:41169] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@79ec65fd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:27,059 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41169,1689444900240 2023-07-15 18:15:27,059 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41169,1689444900240; all regions closed. 2023-07-15 18:15:27,059 DEBUG [M:0;jenkins-hbase4:41169] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:27,059 INFO [M:0;jenkins-hbase4:41169] master.HMaster(1491): Stopping master jetty server 2023-07-15 18:15:27,060 INFO [M:0;jenkins-hbase4:41169] server.AbstractConnector(383): Stopped ServerConnector@c49e52f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:27,060 DEBUG [M:0;jenkins-hbase4:41169] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 18:15:27,061 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 18:15:27,061 DEBUG [M:0;jenkins-hbase4:41169] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 18:15:27,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444903972] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444903972,5,FailOnTimeoutGroup] 2023-07-15 18:15:27,061 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,061 INFO [M:0;jenkins-hbase4:41169] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 18:15:27,061 INFO [RS:1;jenkins-hbase4:39889] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39889,1689444902165; zookeeper connection closed. 2023-07-15 18:15:27,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444903967] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444903967,5,FailOnTimeoutGroup] 2023-07-15 18:15:27,061 INFO [M:0;jenkins-hbase4:41169] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-15 18:15:27,061 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:39889-0x1016a31dca10002, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,061 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1bd73abb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1bd73abb 2023-07-15 18:15:27,061 INFO [M:0;jenkins-hbase4:41169] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-15 18:15:27,061 DEBUG [M:0;jenkins-hbase4:41169] master.HMaster(1512): Stopping service threads 2023-07-15 18:15:27,061 INFO [M:0;jenkins-hbase4:41169] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 18:15:27,062 ERROR [M:0;jenkins-hbase4:41169] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-15 18:15:27,062 INFO [M:0;jenkins-hbase4:41169] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 18:15:27,063 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-15 18:15:27,063 DEBUG [M:0;jenkins-hbase4:41169] zookeeper.ZKUtil(398): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 18:15:27,063 WARN [M:0;jenkins-hbase4:41169] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 18:15:27,063 INFO [M:0;jenkins-hbase4:41169] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 18:15:27,063 INFO [M:0;jenkins-hbase4:41169] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 18:15:27,063 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:27,063 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:27,063 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:27,064 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:27,064 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 18:15:27,064 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=509.57 KB heapSize=609.63 KB 2023-07-15 18:15:27,078 INFO [M:0;jenkins-hbase4:41169] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=509.57 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5694ae66f83740b3b5806c95d89d378c 2023-07-15 18:15:27,084 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5694ae66f83740b3b5806c95d89d378c as hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5694ae66f83740b3b5806c95d89d378c 2023-07-15 18:15:27,089 INFO [M:0;jenkins-hbase4:41169] regionserver.HStore(1080): Added hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5694ae66f83740b3b5806c95d89d378c, entries=151, sequenceid=1128, filesize=26.7 K 2023-07-15 18:15:27,090 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegion(2948): Finished flush of dataSize ~509.57 KB/521804, heapSize ~609.62 KB/624248, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=1128, compaction requested=false 2023-07-15 18:15:27,092 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:27,092 DEBUG [M:0;jenkins-hbase4:41169] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:27,096 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:27,096 INFO [M:0;jenkins-hbase4:41169] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-15 18:15:27,097 INFO [M:0;jenkins-hbase4:41169] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41169 2023-07-15 18:15:27,100 DEBUG [M:0;jenkins-hbase4:41169] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41169,1689444900240 already deleted, retry=false 2023-07-15 18:15:27,662 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,662 INFO [M:0;jenkins-hbase4:41169] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41169,1689444900240; zookeeper connection closed. 2023-07-15 18:15:27,662 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): master:41169-0x1016a31dca10000, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,762 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,762 INFO [RS:0;jenkins-hbase4:44901] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44901,1689444902054; zookeeper connection closed. 
2023-07-15 18:15:27,763 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:44901-0x1016a31dca10001, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,763 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@44726604] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@44726604 2023-07-15 18:15:27,863 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,863 INFO [RS:2;jenkins-hbase4:40191] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40191,1689444902237; zookeeper connection closed. 2023-07-15 18:15:27,863 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): regionserver:40191-0x1016a31dca10003, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:27,863 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3ec5f843] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3ec5f843 2023-07-15 18:15:27,863 INFO [Listener at localhost/40085] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-15 18:15:27,864 WARN [Listener at localhost/40085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:27,869 INFO [Listener at localhost/40085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:27,972 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:27,972 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-670626647-172.31.14.131-1689444896326 (Datanode Uuid 29c7eb7e-84fd-4d72-8500-ffa97dbd968b) service to localhost/127.0.0.1:44585 2023-07-15 18:15:27,973 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data5/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:27,974 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data6/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:27,976 WARN [Listener at localhost/40085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:27,984 INFO [Listener at localhost/40085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:28,088 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-15 18:15:28,088 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-670626647-172.31.14.131-1689444896326 (Datanode Uuid f09ed7b3-8b8e-4b4f-be93-070d5e76138b) service to localhost/127.0.0.1:44585 2023-07-15 18:15:28,088 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data3/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:28,089 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data4/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:28,090 WARN [Listener at localhost/40085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:28,092 INFO [Listener at localhost/40085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:28,196 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:28,196 WARN [BP-670626647-172.31.14.131-1689444896326 heartbeating to localhost/127.0.0.1:44585] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-670626647-172.31.14.131-1689444896326 (Datanode Uuid 6ac438f9-c133-464b-a936-70d4d7e9dd46) service to localhost/127.0.0.1:44585 2023-07-15 18:15:28,197 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data1/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:28,198 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/cluster_1f93952e-8b39-75c4-16ee-41998511542f/dfs/data/data2/current/BP-670626647-172.31.14.131-1689444896326] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:28,231 INFO [Listener at localhost/40085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:28,359 INFO [Listener at localhost/40085] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-15 18:15:28,438 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-15 18:15:28,439 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 18:15:28,439 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(445): 
System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.log.dir so I do NOT create it in target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07 2023-07-15 18:15:28,439 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ae92658-2834-4ecc-d09d-0cd153f6d4b9/hadoop.tmp.dir so I do NOT create it in target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07 2023-07-15 18:15:28,440 INFO [Listener at localhost/40085] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755, deleteOnExit=true 2023-07-15 18:15:28,440 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 18:15:28,440 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/test.cache.data in system properties and HBase conf 2023-07-15 18:15:28,440 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 18:15:28,440 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir in system properties and HBase conf 2023-07-15 18:15:28,441 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 18:15:28,441 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 18:15:28,441 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 18:15:28,441 DEBUG [Listener at localhost/40085] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-15 18:15:28,441 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:15:28,442 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:15:28,442 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 18:15:28,442 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:15:28,442 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/nfs.dump.dir in system properties and HBase conf 2023-07-15 18:15:28,443 INFO [Listener at localhost/40085] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/java.io.tmpdir in system properties and HBase conf 2023-07-15 18:15:28,444 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:15:28,444 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 18:15:28,444 INFO [Listener at localhost/40085] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 18:15:28,449 WARN [Listener at localhost/40085] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:15:28,450 WARN [Listener at localhost/40085] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:15:28,455 DEBUG [Listener at localhost/40085-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016a31dca1000a, quorum=127.0.0.1:54099, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-15 18:15:28,455 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016a31dca1000a, quorum=127.0.0.1:54099, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-15 18:15:28,504 WARN [Listener at localhost/40085] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:28,507 INFO [Listener at localhost/40085] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:28,515 INFO [Listener at localhost/40085] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/java.io.tmpdir/Jetty_localhost_34341_hdfs____.32ip2f/webapp 2023-07-15 18:15:28,621 INFO [Listener at localhost/40085] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34341 2023-07-15 18:15:28,626 WARN [Listener at localhost/40085] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:15:28,626 WARN [Listener at localhost/40085] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:15:28,687 WARN [Listener at localhost/33611] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:28,704 WARN [Listener at localhost/33611] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:28,707 WARN [Listener 
at localhost/33611] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:28,708 INFO [Listener at localhost/33611] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:28,714 INFO [Listener at localhost/33611] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/java.io.tmpdir/Jetty_localhost_46005_datanode____98yo62/webapp 2023-07-15 18:15:28,822 INFO [Listener at localhost/33611] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46005 2023-07-15 18:15:28,830 WARN [Listener at localhost/37003] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:28,871 WARN [Listener at localhost/37003] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:28,874 WARN [Listener at localhost/37003] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:28,875 INFO [Listener at localhost/37003] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:28,879 INFO [Listener at localhost/37003] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/java.io.tmpdir/Jetty_localhost_35363_datanode____tn9awl/webapp 2023-07-15 18:15:28,989 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5cac4ad02160b38: Processing first storage report for DS-283cc080-bab7-40b7-95fa-0c587e1ec377 from datanode 2fa56dba-debc-4f4f-83f7-904135d525cc 2023-07-15 18:15:28,990 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5cac4ad02160b38: from storage DS-283cc080-bab7-40b7-95fa-0c587e1ec377 node DatanodeRegistration(127.0.0.1:43533, datanodeUuid=2fa56dba-debc-4f4f-83f7-904135d525cc, infoPort=35597, infoSecurePort=0, ipcPort=37003, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-15 18:15:28,990 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb5cac4ad02160b38: Processing first storage report for DS-4f806cc0-cba3-471a-a35a-098e90e5e65f from datanode 2fa56dba-debc-4f4f-83f7-904135d525cc 2023-07-15 18:15:28,990 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb5cac4ad02160b38: from storage DS-4f806cc0-cba3-471a-a35a-098e90e5e65f node DatanodeRegistration(127.0.0.1:43533, datanodeUuid=2fa56dba-debc-4f4f-83f7-904135d525cc, infoPort=35597, infoSecurePort=0, ipcPort=37003, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:28,994 INFO [Listener at localhost/37003] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35363 2023-07-15 18:15:29,001 WARN [Listener at localhost/34307] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-15 18:15:29,022 WARN [Listener at localhost/34307] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:29,025 WARN [Listener at localhost/34307] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:29,027 INFO [Listener at localhost/34307] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:29,042 INFO [Listener at localhost/34307] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/java.io.tmpdir/Jetty_localhost_44157_datanode____.fww3q/webapp 2023-07-15 18:15:29,143 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f4697c3d8240d61: Processing first storage report for DS-a37901ec-5774-4971-8c42-e4d19d276813 from datanode b3ab3c04-0734-4e7e-8cb7-9f28283b0006 2023-07-15 18:15:29,143 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f4697c3d8240d61: from storage DS-a37901ec-5774-4971-8c42-e4d19d276813 node DatanodeRegistration(127.0.0.1:38669, datanodeUuid=b3ab3c04-0734-4e7e-8cb7-9f28283b0006, infoPort=33929, infoSecurePort=0, ipcPort=34307, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:29,144 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5f4697c3d8240d61: Processing first storage report for DS-5f71911a-3c80-4871-9c82-d0b3d1225840 from datanode b3ab3c04-0734-4e7e-8cb7-9f28283b0006 2023-07-15 18:15:29,144 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5f4697c3d8240d61: from storage DS-5f71911a-3c80-4871-9c82-d0b3d1225840 node DatanodeRegistration(127.0.0.1:38669, datanodeUuid=b3ab3c04-0734-4e7e-8cb7-9f28283b0006, infoPort=33929, infoSecurePort=0, ipcPort=34307, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:29,164 INFO [Listener at localhost/34307] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44157 2023-07-15 18:15:29,175 WARN [Listener at localhost/44413] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:29,279 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x317614e75172ae11: Processing first storage report for DS-27e5553f-c722-4a45-a2fa-3937435e14a7 from datanode 9df18cd0-931b-45d4-92eb-4d414f73bd61 2023-07-15 18:15:29,279 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x317614e75172ae11: from storage DS-27e5553f-c722-4a45-a2fa-3937435e14a7 node DatanodeRegistration(127.0.0.1:37087, datanodeUuid=9df18cd0-931b-45d4-92eb-4d414f73bd61, infoPort=44231, infoSecurePort=0, ipcPort=44413, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:29,279 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x317614e75172ae11: Processing first storage 
report for DS-777105aa-d525-4d5f-bc9e-9cd4da01984d from datanode 9df18cd0-931b-45d4-92eb-4d414f73bd61 2023-07-15 18:15:29,279 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x317614e75172ae11: from storage DS-777105aa-d525-4d5f-bc9e-9cd4da01984d node DatanodeRegistration(127.0.0.1:37087, datanodeUuid=9df18cd0-931b-45d4-92eb-4d414f73bd61, infoPort=44231, infoSecurePort=0, ipcPort=44413, storageInfo=lv=-57;cid=testClusterID;nsid=439750242;c=1689444928453), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:29,293 DEBUG [Listener at localhost/44413] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07 2023-07-15 18:15:29,295 INFO [Listener at localhost/44413] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/zookeeper_0, clientPort=57464, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 18:15:29,297 INFO [Listener at localhost/44413] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57464 2023-07-15 18:15:29,297 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,298 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,328 INFO [Listener at localhost/44413] util.FSUtils(471): Created version file at hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06 with version=8 2023-07-15 18:15:29,328 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/hbase-staging 2023-07-15 18:15:29,329 DEBUG [Listener at localhost/44413] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 18:15:29,329 DEBUG [Listener at localhost/44413] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 18:15:29,329 DEBUG [Listener at localhost/44413] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 18:15:29,329 DEBUG [Listener at localhost/44413] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
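The entries up to this point cover the environment bring-up for the test: three HDFS datanodes register and send their first block reports, a MiniZooKeeperCluster starts on clientPort=57464, the hbase.rootdir version file is written, and the LocalHBaseCluster master/regionserver ports are randomized. As a minimal sketch (not the test's actual code), this kind of bring-up is normally driven through the public HBaseTestingUtility and StartMiniClusterOption API roughly as follows; the counts simply mirror what the log shows (one master, three region servers, three datanodes).

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterBringUpSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Option values mirror the cluster seen in the log: 1 master, 3 region servers, 3 datanodes.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    util.startMiniCluster(option);   // starts DFS, ZooKeeper and HBase in-process
    try {
      // a test body would run here, e.g. against util.getConnection() / util.getAdmin()
    } finally {
      util.shutdownMiniCluster();    // tears everything down; the test data dir is deleteOnExit
    }
  }
}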
2023-07-15 18:15:29,330 INFO [Listener at localhost/44413] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:29,330 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,330 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,331 INFO [Listener at localhost/44413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:29,331 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,331 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:29,331 INFO [Listener at localhost/44413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:29,331 INFO [Listener at localhost/44413] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44131 2023-07-15 18:15:29,332 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,333 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,334 INFO [Listener at localhost/44413] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44131 connecting to ZooKeeper ensemble=127.0.0.1:57464 2023-07-15 18:15:29,342 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:441310x0, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:29,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44131-0x1016a3252120000 connected 2023-07-15 18:15:29,359 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:29,359 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:29,360 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:29,360 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-15 18:15:29,360 DEBUG [Listener at localhost/44413] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44131 2023-07-15 18:15:29,362 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44131 2023-07-15 18:15:29,366 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-15 18:15:29,367 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-15 18:15:29,369 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:29,369 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:29,369 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:29,370 INFO [Listener at localhost/44413] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 18:15:29,370 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:29,370 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:29,370 INFO [Listener at localhost/44413] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
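By this point the active master's NettyRpcServer is bound, its ZooKeeper session against quorum 127.0.0.1:57464 is established, and the RPC call-queue handlers are running. For orientation only, here is a hedged sketch of how client-side code would point at that same quorum; the two property names are standard HBase configuration keys, the port is the one reported in the log, and the snippet is illustrative rather than lifted from the test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    // clientPort taken from the MiniZooKeeperCluster entry above (57464);
    // a real deployment would use its own quorum and port.
    conf.setInt("hbase.zookeeper.property.clientPort", 57464);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      System.out.println("connected, cluster id = " + admin.getClusterMetrics().getClusterId());
    }
  }
}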
2023-07-15 18:15:29,370 INFO [Listener at localhost/44413] http.HttpServer(1146): Jetty bound to port 36923 2023-07-15 18:15:29,371 INFO [Listener at localhost/44413] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:29,375 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,376 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f26d369{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:29,377 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,377 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@584176ac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:29,387 INFO [Listener at localhost/44413] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:29,388 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:29,389 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:29,389 INFO [Listener at localhost/44413] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:29,390 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,392 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@d183284{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:29,393 INFO [Listener at localhost/44413] server.AbstractConnector(333): Started ServerConnector@298b6695{HTTP/1.1, (http/1.1)}{0.0.0.0:36923} 2023-07-15 18:15:29,394 INFO [Listener at localhost/44413] server.Server(415): Started @35189ms 2023-07-15 18:15:29,394 INFO [Listener at localhost/44413] master.HMaster(444): hbase.rootdir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06, hbase.cluster.distributed=false 2023-07-15 18:15:29,418 INFO [Listener at localhost/44413] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:29,418 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,418 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,418 INFO [Listener at localhost/44413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 
18:15:29,418 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,418 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:29,419 INFO [Listener at localhost/44413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:29,419 INFO [Listener at localhost/44413] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42523 2023-07-15 18:15:29,420 INFO [Listener at localhost/44413] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:29,422 DEBUG [Listener at localhost/44413] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:29,423 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,425 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,426 INFO [Listener at localhost/44413] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42523 connecting to ZooKeeper ensemble=127.0.0.1:57464 2023-07-15 18:15:29,429 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:425230x0, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:29,431 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42523-0x1016a3252120001 connected 2023-07-15 18:15:29,431 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:29,432 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:29,432 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:29,437 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42523 2023-07-15 18:15:29,437 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42523 2023-07-15 18:15:29,437 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42523 2023-07-15 18:15:29,443 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42523 2023-07-15 18:15:29,443 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42523 2023-07-15 18:15:29,446 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:29,446 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:29,446 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:29,447 INFO [Listener at localhost/44413] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:29,447 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:29,447 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:29,448 INFO [Listener at localhost/44413] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:29,449 INFO [Listener at localhost/44413] http.HttpServer(1146): Jetty bound to port 45077 2023-07-15 18:15:29,449 INFO [Listener at localhost/44413] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:29,462 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,462 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3acbfa48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:29,463 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,463 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fd921dc{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:29,469 INFO [Listener at localhost/44413] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:29,469 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:29,470 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:29,470 INFO [Listener at localhost/44413] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:29,471 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,472 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@e86723d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:29,473 INFO [Listener at localhost/44413] server.AbstractConnector(333): Started ServerConnector@32d4c09f{HTTP/1.1, (http/1.1)}{0.0.0.0:45077} 2023-07-15 18:15:29,473 INFO [Listener at localhost/44413] server.Server(415): Started @35269ms 2023-07-15 18:15:29,490 INFO [Listener at localhost/44413] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:29,491 INFO [Listener at localhost/44413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:29,493 INFO [Listener at localhost/44413] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43891 2023-07-15 18:15:29,493 INFO [Listener at localhost/44413] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:29,497 DEBUG [Listener at localhost/44413] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:29,498 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,499 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,500 INFO [Listener at localhost/44413] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43891 connecting to ZooKeeper ensemble=127.0.0.1:57464 2023-07-15 18:15:29,504 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:438910x0, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:29,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43891-0x1016a3252120002 connected 2023-07-15 18:15:29,506 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): 
regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:29,506 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:29,507 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:29,507 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43891 2023-07-15 18:15:29,508 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43891 2023-07-15 18:15:29,508 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43891 2023-07-15 18:15:29,508 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43891 2023-07-15 18:15:29,508 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43891 2023-07-15 18:15:29,510 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:29,510 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:29,510 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:29,511 INFO [Listener at localhost/44413] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:29,511 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:29,511 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:29,511 INFO [Listener at localhost/44413] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-15 18:15:29,512 INFO [Listener at localhost/44413] http.HttpServer(1146): Jetty bound to port 46443 2023-07-15 18:15:29,512 INFO [Listener at localhost/44413] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:29,513 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,513 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@63baada7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:29,513 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,514 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ffc762d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:29,518 INFO [Listener at localhost/44413] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:29,519 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:29,519 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:29,519 INFO [Listener at localhost/44413] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:29,520 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,521 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5eb598c2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:29,522 INFO [Listener at localhost/44413] server.AbstractConnector(333): Started ServerConnector@6b2894ff{HTTP/1.1, (http/1.1)}{0.0.0.0:46443} 2023-07-15 18:15:29,522 INFO [Listener at localhost/44413] server.Server(415): Started @35317ms 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:29,533 INFO [Listener at localhost/44413] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:29,534 INFO [Listener at localhost/44413] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42683 2023-07-15 18:15:29,534 INFO [Listener at localhost/44413] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:29,536 DEBUG [Listener at localhost/44413] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:29,536 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,537 INFO [Listener at localhost/44413] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,538 INFO [Listener at localhost/44413] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42683 connecting to ZooKeeper ensemble=127.0.0.1:57464 2023-07-15 18:15:29,542 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:426830x0, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:29,543 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42683-0x1016a3252120003 connected 2023-07-15 18:15:29,543 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:29,544 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:29,544 DEBUG [Listener at localhost/44413] zookeeper.ZKUtil(164): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:29,545 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42683 2023-07-15 18:15:29,545 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42683 2023-07-15 18:15:29,545 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42683 2023-07-15 18:15:29,545 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42683 2023-07-15 18:15:29,546 DEBUG [Listener at localhost/44413] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42683 2023-07-15 18:15:29,548 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:29,548 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:29,548 INFO [Listener at localhost/44413] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:29,549 INFO [Listener at localhost/44413] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:29,549 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:29,549 INFO [Listener at localhost/44413] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:29,549 INFO [Listener at localhost/44413] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:29,549 INFO [Listener at localhost/44413] http.HttpServer(1146): Jetty bound to port 36139 2023-07-15 18:15:29,550 INFO [Listener at localhost/44413] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:29,551 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,551 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@fa12fc1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:29,551 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,551 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64e8e413{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:29,556 INFO [Listener at localhost/44413] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:29,557 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:29,557 INFO [Listener at localhost/44413] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:29,558 INFO [Listener at localhost/44413] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:29,559 INFO [Listener at localhost/44413] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:29,560 INFO [Listener at localhost/44413] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1322d8e2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:29,561 INFO [Listener at localhost/44413] server.AbstractConnector(333): Started ServerConnector@6f673757{HTTP/1.1, (http/1.1)}{0.0.0.0:36139} 2023-07-15 18:15:29,561 INFO [Listener at localhost/44413] server.Server(415): Started @35357ms 2023-07-15 18:15:29,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:29,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7ced8210{HTTP/1.1, (http/1.1)}{0.0.0.0:45911} 2023-07-15 18:15:29,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @35367ms 2023-07-15 18:15:29,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,573 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:29,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,574 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:29,574 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:29,575 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:29,576 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:29,576 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:29,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689444929330 from backup master directory 2023-07-15 
18:15:29,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:29,580 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,580 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:29,580 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:29,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/hbase.id with ID: 720c1026-6696-4047-92ac-0e7dd5e8ce7a 2023-07-15 18:15:29,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:29,620 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x39db3047 to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:29,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@364d0bbe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:29,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:29,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 18:15:29,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:29,640 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store-tmp 2023-07-15 18:15:29,654 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:29,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:29,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:29,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:29,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:29,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:29,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
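The entries above show the master bootstrapping its local 'master:store' region: a table descriptor with a single 'proc' column family (BLOOMFILTER=ROW, VERSIONS=1, BLOCKSIZE=65536, BLOCKCACHE=true, and so on) is created under the store-tmp directory, instantiated once, and closed again. A small sketch of those same family settings expressed through the HBase 2.x descriptor builder API is given below; it simply restates the attributes printed in the log and is not code taken from HBase itself.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Column family attributes copied from the log entry for 'master:store'.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();
    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();
    System.out.println(store);
  }
}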
2023-07-15 18:15:29,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:29,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44131%2C1689444929330, suffix=, logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689444929330, archiveDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/oldWALs, maxLogs=10 2023-07-15 18:15:29,674 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK] 2023-07-15 18:15:29,674 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK] 2023-07-15 18:15:29,675 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK] 2023-07-15 18:15:29,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689444929330/jenkins-hbase4.apache.org%2C44131%2C1689444929330.1689444929659 2023-07-15 18:15:29,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK], DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK]] 2023-07-15 18:15:29,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:29,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:29,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,685 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,687 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 18:15:29,688 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 18:15:29,689 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:29,689 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,690 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:29,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:29,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9770082880, jitterRate=-0.09009012579917908}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:29,696 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:29,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 18:15:29,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 18:15:29,707 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 18:15:29,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 18:15:29,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-15 18:15:29,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-15 18:15:29,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 18:15:29,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 18:15:29,712 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-15 18:15:29,713 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 18:15:29,713 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 18:15:29,713 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 18:15:29,715 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 18:15:29,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 18:15:29,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 18:15:29,718 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:29,718 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:29,718 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-15 18:15:29,718 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:29,718 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44131,1689444929330, sessionid=0x1016a3252120000, setting cluster-up flag (Was=false) 2023-07-15 18:15:29,723 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 18:15:29,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,732 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:29,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 18:15:29,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:29,738 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.hbase-snapshot/.tmp 2023-07-15 18:15:29,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 18:15:29,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 18:15:29,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 18:15:29,741 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:29,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
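An aside on the coprocessor priorities logged above: 536870911 is Integer.MAX_VALUE / 4, which is consistent with HBase's base priority for system coprocessors, and each additional system coprocessor appears to be registered one step higher (536870912 for the next one, and so on). The Java sketch below only reproduces that arithmetic; the class name is illustrative and nothing in it is quoted from the HBase source.

    public class CoprocessorPrioritySketch {
        public static void main(String[] args) {
            // 2147483647 / 4 = 536870911, the priority of the first system coprocessor above.
            int systemPriority = Integer.MAX_VALUE / 4;
            // Subsequent system coprocessors get the next values: 536870912, 536870913.
            for (int i = 0; i < 3; i++) {
                System.out.println("system coprocessor #" + (i + 1) + " priority=" + (systemPriority + i));
            }
        }
    }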
2023-07-15 18:15:29,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-15 18:15:29,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:29,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:29,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 18:15:29,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:29,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
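The StochasticLoadBalancer line lists its cost functions and reports the sum of their multipliers. As a rough illustration of how a multiplier-weighted cost of that shape can be combined (the function names come from the log, but the costs and weights below are invented placeholders and this is not the balancer's exact formula):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class WeightedCostSketch {
        public static void main(String[] args) {
            // cost in [0,1] paired with a multiplier (weight); values are placeholders.
            Map<String, double[]> costs = new LinkedHashMap<>();
            costs.put("RegionCountSkewCostFunction", new double[] {0.20, 500});
            costs.put("MoveCostFunction",            new double[] {0.10, 7});
            costs.put("TableSkewCostFunction",       new double[] {0.05, 35});

            double weighted = 0, sumOfMultipliers = 0;
            for (double[] c : costs.values()) {
                weighted += c[0] * c[1];
                sumOfMultipliers += c[1];
            }
            // Overall cost as a multiplier-weighted average of the individual costs.
            System.out.println("overall cost = " + weighted / sumOfMultipliers);
        }
    }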
2023-07-15 18:15:29,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:29,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:29,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:29,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:29,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 18:15:29,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:29,754 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689444959755 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 18:15:29,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
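Each MASTER_* executor service above is a small bounded thread pool described by its corePoolSize/maxPoolSize. The snippet below is only a plain java.util.concurrent approximation of such a pool (using the corePoolSize=5, maxPoolSize=5 of MASTER_OPEN_REGION as an example), not HBase's own ExecutorService class.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class MasterPoolSketch {
        public static void main(String[] args) {
            // Rough stand-in for an executor logged as corePoolSize=5, maxPoolSize=5.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
            pool.allowCoreThreadTimeOut(true);
            pool.submit(() -> System.out.println("an open-region task would run here"));
            pool.shutdown();
        }
    }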
2023-07-15 18:15:29,756 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:29,756 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 18:15:29,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 18:15:29,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 18:15:29,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 18:15:29,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 18:15:29,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 18:15:29,757 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444929757,5,FailOnTimeoutGroup] 2023-07-15 18:15:29,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444929758,5,FailOnTimeoutGroup] 2023-07-15 18:15:29,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-15 18:15:29,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
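The HMaster(1461) line notes that reopening regions with a very high storeFileRefCount stays disabled until hbase.regions.recovery.store.file.ref.count is set to a value greater than 0. A minimal sketch of setting that property programmatically; the threshold of 3 is an arbitrary example, not a value taken from this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RefCountConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Any value > 0 enables reopening regions whose store files are pinned by
            // too many readers; 3 is purely illustrative.
            conf.setInt("hbase.regions.recovery.store.file.ref.count", 3);
            System.out.println(conf.getInt("hbase.regions.recovery.store.file.ref.count", 0));
        }
    }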
2023-07-15 18:15:29,758 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:29,764 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(951): ClusterId : 720c1026-6696-4047-92ac-0e7dd5e8ce7a 2023-07-15 18:15:29,764 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(951): ClusterId : 720c1026-6696-4047-92ac-0e7dd5e8ce7a 2023-07-15 18:15:29,767 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(951): ClusterId : 720c1026-6696-4047-92ac-0e7dd5e8ce7a 2023-07-15 18:15:29,767 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:29,767 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:29,767 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:29,775 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:29,775 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:29,775 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:29,775 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:29,775 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:29,776 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:29,782 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:29,782 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:29,782 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:29,785 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ReadOnlyZKClient(139): Connect 0x2ca8e561 to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:29,785 DEBUG [RS:0;jenkins-hbase4:42523] 
zookeeper.ReadOnlyZKClient(139): Connect 0x40bc7536 to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:29,785 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ReadOnlyZKClient(139): Connect 0x5177f365 to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:29,799 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:29,799 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:29,799 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06 2023-07-15 18:15:29,801 DEBUG [RS:0;jenkins-hbase4:42523] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e7b8351, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:29,801 DEBUG [RS:0;jenkins-hbase4:42523] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a441e46, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:29,801 DEBUG [RS:2;jenkins-hbase4:42683] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26508e11, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:29,802 DEBUG [RS:2;jenkins-hbase4:42683] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@635204bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:29,803 DEBUG [RS:1;jenkins-hbase4:43891] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fb0173b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:29,803 DEBUG [RS:1;jenkins-hbase4:43891] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b8f9815, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:29,817 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42523 2023-07-15 18:15:29,817 INFO [RS:0;jenkins-hbase4:42523] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:29,817 INFO [RS:0;jenkins-hbase4:42523] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:29,817 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:29,817 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:42683 2023-07-15 18:15:29,817 INFO [RS:2;jenkins-hbase4:42683] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:29,817 INFO [RS:2;jenkins-hbase4:42683] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:29,817 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:29,817 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43891 2023-07-15 18:15:29,817 INFO [RS:1;jenkins-hbase4:43891] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:29,817 INFO [RS:1;jenkins-hbase4:43891] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:29,818 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1022): About to register with Master. 
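Each region server above installs a shutdown hook thread (Shutdownhook:RS:n;...). In plain Java the underlying mechanism is Runtime.addShutdownHook; the sketch below mirrors only that pattern, not HBase's ShutdownHook class, and the thread name is a made-up echo of the logged one.

    public class ShutdownHookSketch {
        public static void main(String[] args) throws InterruptedException {
            Thread hook = new Thread(
                () -> System.out.println("flush and close resources before the JVM exits"),
                "Shutdownhook:RS:0;example-host:42523");
            Runtime.getRuntime().addShutdownHook(hook);
            // The hook runs when the JVM terminates normally or receives SIGTERM.
            Thread.sleep(100);
        }
    }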
2023-07-15 18:15:29,818 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689444929330 with isa=jenkins-hbase4.apache.org/172.31.14.131:42523, startcode=1689444929417 2023-07-15 18:15:29,818 DEBUG [RS:0;jenkins-hbase4:42523] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:29,819 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689444929330 with isa=jenkins-hbase4.apache.org/172.31.14.131:42683, startcode=1689444929532 2023-07-15 18:15:29,819 DEBUG [RS:2;jenkins-hbase4:42683] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:29,819 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689444929330 with isa=jenkins-hbase4.apache.org/172.31.14.131:43891, startcode=1689444929490 2023-07-15 18:15:29,819 DEBUG [RS:1;jenkins-hbase4:43891] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:29,830 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36229, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:29,830 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59839, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:29,832 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,832 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
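The reportForDuty lines identify each server as host,port,startcode (for example jenkins-hbase4.apache.org,42523,1689444929417), where the startcode is the process start time in epoch milliseconds. A small, self-contained parse of that format using plain string handling rather than HBase's ServerName class:

    import java.time.Instant;

    public class ServerNameParseSketch {
        public static void main(String[] args) {
            String serverName = "jenkins-hbase4.apache.org,42523,1689444929417";
            String[] parts = serverName.split(",");
            String host = parts[0];
            int port = Integer.parseInt(parts[1]);
            long startcode = Long.parseLong(parts[2]);
            // Prints 2023-07-15T18:15:29.417Z, matching the timestamps in this log.
            System.out.println(host + ":" + port + " started at " + Instant.ofEpochMilli(startcode));
        }
    }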
2023-07-15 18:15:29,832 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,832 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06 2023-07-15 18:15:29,832 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33611 2023-07-15 18:15:29,832 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36923 2023-07-15 18:15:29,832 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45515, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:29,832 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06 2023-07-15 18:15:29,833 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33611 2023-07-15 18:15:29,833 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36923 2023-07-15 18:15:29,833 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,833 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06 2023-07-15 18:15:29,833 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33611 2023-07-15 18:15:29,833 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36923 2023-07-15 18:15:29,835 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 18:15:29,835 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
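After registration the master pushes a few settings to every region server: hbase.rootdir, fs.defaultFS and hbase.master.info.port in the lines above. Reading the same keys from a Configuration looks roughly like this; the fallback values are placeholders, not values from this run.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterPushedConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // The keys the master handed to the region servers in the log above.
            System.out.println("hbase.rootdir          = " + conf.get("hbase.rootdir", "<unset>"));
            System.out.println("fs.defaultFS           = " + conf.get("fs.defaultFS", "file:///"));
            System.out.println("hbase.master.info.port = " + conf.getInt("hbase.master.info.port", -1));
        }
    }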
2023-07-15 18:15:29,835 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 18:15:29,838 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:29,839 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ZKUtil(162): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,839 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ZKUtil(162): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,839 WARN [RS:1;jenkins-hbase4:43891] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:29,839 WARN [RS:0;jenkins-hbase4:42523] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:29,840 INFO [RS:1;jenkins-hbase4:43891] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:29,840 INFO [RS:0;jenkins-hbase4:42523] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:29,840 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,840 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43891,1689444929490] 2023-07-15 18:15:29,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42523,1689444929417] 2023-07-15 18:15:29,840 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42683,1689444929532] 2023-07-15 18:15:29,841 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ZKUtil(162): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,841 WARN [RS:2;jenkins-hbase4:42683] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
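The ZNodeClearer warnings fire because the HBASE_ZNODE_FILE environment variable is unset, so the start scripts cannot clear a crashed server's ephemeral znode and recovery has to wait for the ZooKeeper session to expire. Checking for that variable is ordinary Java:

    public class ZnodeFileCheckSketch {
        public static void main(String[] args) {
            String znodeFile = System.getenv("HBASE_ZNODE_FILE");
            if (znodeFile == null) {
                // Mirrors the logged warning: without this file the znode is not cleared
                // on crash by the start scripts, which lengthens MTTR.
                System.out.println("HBASE_ZNODE_FILE not set; znode will expire on its own");
            } else {
                System.out.println("znode file: " + znodeFile);
            }
        }
    }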
2023-07-15 18:15:29,841 INFO [RS:2;jenkins-hbase4:42683] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:29,841 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,849 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:29,850 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ZKUtil(162): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,851 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:29,851 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ZKUtil(162): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,851 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ZKUtil(162): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,851 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ZKUtil(162): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,852 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ZKUtil(162): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,852 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ZKUtil(162): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,852 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ZKUtil(162): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,852 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ZKUtil(162): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,852 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ZKUtil(162): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,852 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/info 2023-07-15 18:15:29,853 DEBUG [RS:1;jenkins-hbase4:43891] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:29,853 
INFO [RS:1;jenkins-hbase4:43891] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:29,853 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:29,853 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:29,853 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:29,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:29,854 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:29,854 INFO [RS:2;jenkins-hbase4:42683] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:29,854 INFO [RS:0;jenkins-hbase4:42523] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:29,855 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:29,856 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:29,856 INFO [RS:1;jenkins-hbase4:43891] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:29,856 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:29,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:29,857 INFO [RS:2;jenkins-hbase4:42683] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:29,857 INFO [RS:0;jenkins-hbase4:42523] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:29,857 INFO [RS:1;jenkins-hbase4:43891] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:29,857 INFO [RS:2;jenkins-hbase4:42683] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:29,857 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,857 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,858 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/table 2023-07-15 18:15:29,859 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:29,859 INFO [RS:0;jenkins-hbase4:42523] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:29,859 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
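The MemStoreFlusher numbers above are consistent with the stock defaults of 40% of the heap for the global memstore limit and 95% of that limit for the low watermark; under those assumptions the implied heap is roughly 1956 MB. The arithmetic below is a hedged reconstruction, not a value read from this test's configuration.

    public class MemStoreLimitSketch {
        public static void main(String[] args) {
            // Assumed defaults: global limit = 0.4 * heap, low mark = 0.95 * global limit.
            double heapMb = 1956.0;                   // implied heap size if the 40% default applies
            double globalLimitMb = heapMb * 0.40;     // ~782.4 M, as logged
            double lowMarkMb = globalLimitMb * 0.95;  // ~743.3 M, as logged
            System.out.printf("limit=%.1f M, lowMark=%.1f M%n", globalLimitMb, lowMarkMb);
        }
    }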
2023-07-15 18:15:29,859 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:29,859 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:29,860 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:29,860 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:29,862 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,862 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,863 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740 2023-07-15 18:15:29,863 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,863 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,863 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,863 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740 2023-07-15 18:15:29,864 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service 
name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:29,864 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,864 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:29,865 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:1;jenkins-hbase4:43891] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG 
[RS:2;jenkins-hbase4:42683] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,865 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,866 DEBUG [RS:0;jenkins-hbase4:42523] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:29,866 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,866 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,866 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,866 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,868 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,868 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,868 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,868 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,868 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 18:15:29,870 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:29,874 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,874 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,874 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,874 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
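The FlushLargeStoresPolicy line above falls back to the region's memstore flush size divided by the number of column families. For hbase:meta that is a 128 MB flush size over three families (info, rep_barrier, table), which reproduces the 42.7 M figure here and the flushSizeLowerBound=44739242 logged just below.

    public class FlushLowerBoundSketch {
        public static void main(String[] args) {
            long memstoreFlushSize = 134_217_728L;  // 128 MB flush size used for hbase:meta
            int columnFamilies = 3;                 // info, rep_barrier, table
            long lowerBound = memstoreFlushSize / columnFamilies;
            // 44739242 bytes, i.e. about 42.7 M
            System.out.printf("%d bytes = %.1f M%n", lowerBound, lowerBound / 1024.0 / 1024.0);
        }
    }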
2023-07-15 18:15:29,877 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:29,878 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10369745920, jitterRate=-0.03424215316772461}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:29,878 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:29,878 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:29,878 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:29,878 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:29,878 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:29,878 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:29,879 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:29,879 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:29,881 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:29,881 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 18:15:29,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 18:15:29,884 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 18:15:29,885 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 18:15:29,886 INFO [RS:1;jenkins-hbase4:43891] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:29,887 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43891,1689444929490-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,895 INFO [RS:0;jenkins-hbase4:42523] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:29,895 INFO [RS:2;jenkins-hbase4:42683] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:29,895 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42523,1689444929417-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
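The ConstantSizeRegionSplitPolicy figures above follow from a 10 GiB hbase.hregion.max.filesize adjusted by the logged jitterRate (the 10 GiB base is an assumption about this run's configuration, not something stated in the log):

    public class SplitSizeJitterSketch {
        public static void main(String[] args) {
            long maxFileSize = 10_737_418_240L;        // assumed hbase.hregion.max.filesize (10 GiB)
            double jitterRate = -0.03424215316772461;  // jitterRate logged for region 1588230740
            long desired = Math.round(maxFileSize * (1.0 + jitterRate));
            System.out.println("desiredMaxFileSize=" + desired);  // 10369745920, as logged
        }
    }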
2023-07-15 18:15:29,895 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42683,1689444929532-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,898 INFO [RS:1;jenkins-hbase4:43891] regionserver.Replication(203): jenkins-hbase4.apache.org,43891,1689444929490 started 2023-07-15 18:15:29,898 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43891,1689444929490, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43891, sessionid=0x1016a3252120002 2023-07-15 18:15:29,898 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:29,898 DEBUG [RS:1;jenkins-hbase4:43891] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,898 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43891,1689444929490' 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43891,1689444929490' 2023-07-15 18:15:29,899 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:29,900 DEBUG [RS:1;jenkins-hbase4:43891] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:29,900 DEBUG [RS:1;jenkins-hbase4:43891] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:29,900 INFO [RS:1;jenkins-hbase4:43891] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 18:15:29,902 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,903 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ZKUtil(398): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 18:15:29,903 INFO [RS:1;jenkins-hbase4:43891] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 18:15:29,903 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:29,904 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,911 INFO [RS:0;jenkins-hbase4:42523] regionserver.Replication(203): jenkins-hbase4.apache.org,42523,1689444929417 started 2023-07-15 18:15:29,911 INFO [RS:2;jenkins-hbase4:42683] regionserver.Replication(203): jenkins-hbase4.apache.org,42683,1689444929532 started 2023-07-15 18:15:29,911 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42523,1689444929417, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42523, sessionid=0x1016a3252120001 2023-07-15 18:15:29,912 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42683,1689444929532, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42683, sessionid=0x1016a3252120003 2023-07-15 18:15:29,912 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:29,912 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:29,912 DEBUG [RS:2;jenkins-hbase4:42683] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,912 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42683,1689444929532' 2023-07-15 18:15:29,912 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:29,912 DEBUG [RS:0;jenkins-hbase4:42523] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,912 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42523,1689444929417' 2023-07-15 18:15:29,912 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:29,912 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:29,912 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42523,1689444929417' 2023-07-15 
18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42683,1689444929532' 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:29,913 DEBUG [RS:2;jenkins-hbase4:42683] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:29,913 DEBUG [RS:0;jenkins-hbase4:42523] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:29,913 INFO [RS:2;jenkins-hbase4:42683] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 18:15:29,913 INFO [RS:0;jenkins-hbase4:42523] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-15 18:15:29,914 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,914 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,914 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ZKUtil(398): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 18:15:29,914 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ZKUtil(398): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-15 18:15:29,914 INFO [RS:0;jenkins-hbase4:42523] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 18:15:29,914 INFO [RS:2;jenkins-hbase4:42683] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-15 18:15:29,914 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,914 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,914 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:29,914 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:30,007 INFO [RS:1;jenkins-hbase4:43891] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43891%2C1689444929490, suffix=, logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,43891,1689444929490, archiveDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs, maxLogs=32 2023-07-15 18:15:30,018 INFO [RS:0;jenkins-hbase4:42523] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42523%2C1689444929417, suffix=, logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42523,1689444929417, archiveDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs, maxLogs=32 2023-07-15 18:15:30,018 INFO [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42683%2C1689444929532, suffix=, logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42683,1689444929532, archiveDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs, maxLogs=32 2023-07-15 18:15:30,029 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK] 2023-07-15 18:15:30,030 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK] 2023-07-15 18:15:30,029 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK] 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:30,036 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:30,037 INFO [RS:1;jenkins-hbase4:43891] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,43891,1689444929490/jenkins-hbase4.apache.org%2C43891%2C1689444929490.1689444930008 2023-07-15 18:15:30,039 DEBUG [RS:1;jenkins-hbase4:43891] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK]] 2023-07-15 18:15:30,040 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42683,1689444929532, state=OPENING 2023-07-15 18:15:30,042 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 18:15:30,045 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:30,045 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK] 2023-07-15 18:15:30,045 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK] 2023-07-15 18:15:30,046 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK] 2023-07-15 18:15:30,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42683,1689444929532}] 2023-07-15 18:15:30,046 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:30,047 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK] 2023-07-15 18:15:30,048 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK] 2023-07-15 18:15:30,048 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK] 2023-07-15 18:15:30,049 WARN [ReadOnlyZKClient-127.0.0.1:57464@0x39db3047] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-15 18:15:30,049 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:30,051 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:30,055 INFO 
[RS:0;jenkins-hbase4:42523] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42523,1689444929417/jenkins-hbase4.apache.org%2C42523%2C1689444929417.1689444930023 2023-07-15 18:15:30,055 INFO [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42683,1689444929532/jenkins-hbase4.apache.org%2C42683%2C1689444929532.1689444930026 2023-07-15 18:15:30,056 DEBUG [RS:0;jenkins-hbase4:42523] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK], DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK]] 2023-07-15 18:15:30,056 DEBUG [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK], DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK], DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK]] 2023-07-15 18:15:30,056 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42683] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:60340 deadline: 1689444990051, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:30,203 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:30,205 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:30,206 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60356, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:30,214 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 18:15:30,215 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:30,216 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42683%2C1689444929532.meta, suffix=.meta, logDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42683,1689444929532, archiveDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs, maxLogs=32 2023-07-15 18:15:30,253 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK] 2023-07-15 18:15:30,254 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK] 2023-07-15 18:15:30,253 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK] 2023-07-15 18:15:30,261 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/WALs/jenkins-hbase4.apache.org,42683,1689444929532/jenkins-hbase4.apache.org%2C42683%2C1689444929532.meta.1689444930217.meta 2023-07-15 18:15:30,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37087,DS-27e5553f-c722-4a45-a2fa-3937435e14a7,DISK], DatanodeInfoWithStorage[127.0.0.1:38669,DS-a37901ec-5774-4971-8c42-e4d19d276813,DISK], DatanodeInfoWithStorage[127.0.0.1:43533,DS-283cc080-bab7-40b7-95fa-0c587e1ec377,DISK]] 2023-07-15 18:15:30,261 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 18:15:30,262 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 18:15:30,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 18:15:30,264 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:30,265 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/info 2023-07-15 18:15:30,266 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/info 2023-07-15 18:15:30,266 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:30,267 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:30,267 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:30,268 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:30,268 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:30,269 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:30,269 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:30,270 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:30,270 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/table 2023-07-15 18:15:30,270 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/table 2023-07-15 18:15:30,271 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:30,271 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:30,272 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740 2023-07-15 18:15:30,273 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740 2023-07-15 18:15:30,276 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-15 18:15:30,278 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:30,279 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10101139520, jitterRate=-0.059258073568344116}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:30,279 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:30,280 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689444930203 2023-07-15 18:15:30,284 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 18:15:30,284 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 18:15:30,285 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42683,1689444929532, state=OPEN 2023-07-15 18:15:30,286 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:30,286 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:30,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 18:15:30,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42683,1689444929532 in 240 msec 2023-07-15 18:15:30,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 18:15:30,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 407 msec 2023-07-15 18:15:30,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 549 msec 2023-07-15 18:15:30,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689444930291, completionTime=-1 2023-07-15 18:15:30,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 18:15:30,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-15 18:15:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 18:15:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689444990296 2023-07-15 18:15:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689445050296 2023-07-15 18:15:30,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-15 18:15:30,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689444929330-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689444929330-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689444929330-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44131, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-15 18:15:30,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:30,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 18:15:30,304 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 18:15:30,305 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:30,306 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:30,307 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,308 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90 empty. 2023-07-15 18:15:30,308 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,308 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 18:15:30,320 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:30,322 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fb66e1f4e78f045b48bf11a66e12cb90, NAME => 'hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp 2023-07-15 18:15:30,333 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,333 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fb66e1f4e78f045b48bf11a66e12cb90, disabling compactions & flushes 2023-07-15 18:15:30,333 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 
2023-07-15 18:15:30,333 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:30,334 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. after waiting 0 ms 2023-07-15 18:15:30,334 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:30,381 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:30,381 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fb66e1f4e78f045b48bf11a66e12cb90: 2023-07-15 18:15:30,387 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:30,389 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 18:15:30,390 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:30,390 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:30,391 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444930391"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444930391"}]},"ts":"1689444930391"} 2023-07-15 18:15:30,391 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:30,393 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,393 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd empty. 
2023-07-15 18:15:30,394 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,394 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 18:15:30,395 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:30,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:30,397 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930397"}]},"ts":"1689444930397"} 2023-07-15 18:15:30,398 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 18:15:30,401 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:30,402 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:30,402 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:30,402 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:30,402 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:30,402 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fb66e1f4e78f045b48bf11a66e12cb90, ASSIGN}] 2023-07-15 18:15:30,407 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fb66e1f4e78f045b48bf11a66e12cb90, ASSIGN 2023-07-15 18:15:30,411 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fb66e1f4e78f045b48bf11a66e12cb90, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42683,1689444929532; forceNewPlan=false, retain=false 2023-07-15 18:15:30,421 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:30,422 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 79dfe318ca9b6da52ea91d794974bcfd, NAME => 'hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp 2023-07-15 18:15:30,429 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-15 18:15:30,429 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 79dfe318ca9b6da52ea91d794974bcfd, disabling compactions & flushes 2023-07-15 18:15:30,438 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. after waiting 0 ms 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,438 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,438 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 79dfe318ca9b6da52ea91d794974bcfd: 2023-07-15 18:15:30,441 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:30,442 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444930441"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444930441"}]},"ts":"1689444930441"} 2023-07-15 18:15:30,443 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:30,443 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:30,444 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930444"}]},"ts":"1689444930444"} 2023-07-15 18:15:30,445 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 18:15:30,449 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:30,449 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:30,449 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:30,449 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:30,449 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:30,449 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79dfe318ca9b6da52ea91d794974bcfd, ASSIGN}] 2023-07-15 18:15:30,451 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79dfe318ca9b6da52ea91d794974bcfd, ASSIGN 2023-07-15 18:15:30,452 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=79dfe318ca9b6da52ea91d794974bcfd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42523,1689444929417; forceNewPlan=false, retain=false 2023-07-15 18:15:30,452 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-15 18:15:30,454 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=fb66e1f4e78f045b48bf11a66e12cb90, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:30,454 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444930454"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444930454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444930454"}]},"ts":"1689444930454"} 2023-07-15 18:15:30,454 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79dfe318ca9b6da52ea91d794974bcfd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:30,454 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444930454"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444930454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444930454"}]},"ts":"1689444930454"} 2023-07-15 18:15:30,457 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure fb66e1f4e78f045b48bf11a66e12cb90, server=jenkins-hbase4.apache.org,42683,1689444929532}] 2023-07-15 18:15:30,458 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 79dfe318ca9b6da52ea91d794974bcfd, server=jenkins-hbase4.apache.org,42523,1689444929417}] 2023-07-15 18:15:30,611 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:30,611 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:30,613 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:30,615 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 
2023-07-15 18:15:30,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fb66e1f4e78f045b48bf11a66e12cb90, NAME => 'hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:30,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,617 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,618 INFO [StoreOpener-fb66e1f4e78f045b48bf11a66e12cb90-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,618 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 79dfe318ca9b6da52ea91d794974bcfd, NAME => 'hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. service=MultiRowMutationService 2023-07-15 18:15:30,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,619 DEBUG [StoreOpener-fb66e1f4e78f045b48bf11a66e12cb90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/info 2023-07-15 18:15:30,620 DEBUG [StoreOpener-fb66e1f4e78f045b48bf11a66e12cb90-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/info 2023-07-15 18:15:30,620 INFO [StoreOpener-fb66e1f4e78f045b48bf11a66e12cb90-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fb66e1f4e78f045b48bf11a66e12cb90 columnFamilyName info 2023-07-15 18:15:30,620 INFO [StoreOpener-fb66e1f4e78f045b48bf11a66e12cb90-1] regionserver.HStore(310): Store=fb66e1f4e78f045b48bf11a66e12cb90/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:30,621 INFO [StoreOpener-79dfe318ca9b6da52ea91d794974bcfd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:30,627 DEBUG [StoreOpener-79dfe318ca9b6da52ea91d794974bcfd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/m 2023-07-15 18:15:30,627 DEBUG [StoreOpener-79dfe318ca9b6da52ea91d794974bcfd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/m 2023-07-15 18:15:30,627 INFO [StoreOpener-79dfe318ca9b6da52ea91d794974bcfd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 79dfe318ca9b6da52ea91d794974bcfd columnFamilyName m 2023-07-15 18:15:30,628 INFO [StoreOpener-79dfe318ca9b6da52ea91d794974bcfd-1] regionserver.HStore(310): Store=79dfe318ca9b6da52ea91d794974bcfd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:30,628 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:30,629 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fb66e1f4e78f045b48bf11a66e12cb90; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11435321760, jitterRate=0.06499733030796051}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:30,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fb66e1f4e78f045b48bf11a66e12cb90: 2023-07-15 18:15:30,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,629 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90., pid=8, masterSystemTime=1689444930611 2023-07-15 18:15:30,632 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post 
open deploy task for hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:30,632 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:30,633 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=fb66e1f4e78f045b48bf11a66e12cb90, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:30,633 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444930633"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444930633"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444930633"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444930633"}]},"ts":"1689444930633"} 2023-07-15 18:15:30,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:30,638 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-15 18:15:30,638 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure fb66e1f4e78f045b48bf11a66e12cb90, server=jenkins-hbase4.apache.org,42683,1689444929532 in 179 msec 2023-07-15 18:15:30,638 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:30,639 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 79dfe318ca9b6da52ea91d794974bcfd; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4fbde7dd, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:30,639 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 79dfe318ca9b6da52ea91d794974bcfd: 2023-07-15 18:15:30,639 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd., pid=9, masterSystemTime=1689444930611 2023-07-15 18:15:30,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-15 18:15:30,642 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fb66e1f4e78f045b48bf11a66e12cb90, ASSIGN in 236 msec 2023-07-15 18:15:30,643 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:30,643 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 
2023-07-15 18:15:30,645 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:30,645 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79dfe318ca9b6da52ea91d794974bcfd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:30,645 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930645"}]},"ts":"1689444930645"} 2023-07-15 18:15:30,645 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444930645"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444930645"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444930645"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444930645"}]},"ts":"1689444930645"} 2023-07-15 18:15:30,646 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 18:15:30,648 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-15 18:15:30,648 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 79dfe318ca9b6da52ea91d794974bcfd, server=jenkins-hbase4.apache.org,42523,1689444929417 in 189 msec 2023-07-15 18:15:30,649 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:30,649 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-15 18:15:30,650 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=79dfe318ca9b6da52ea91d794974bcfd, ASSIGN in 199 msec 2023-07-15 18:15:30,650 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:30,650 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930650"}]},"ts":"1689444930650"} 2023-07-15 18:15:30,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 346 msec 2023-07-15 18:15:30,651 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 18:15:30,653 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:30,654 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 266 msec 2023-07-15 18:15:30,692 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:30,693 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42326, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:30,697 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 18:15:30,697 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-15 18:15:30,701 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:30,701 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:30,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:30,705 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689444929330] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 18:15:30,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 18:15:30,706 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:30,706 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:30,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 18:15:30,716 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:30,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-15 18:15:30,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 18:15:30,729 DEBUG [Listener at localhost/44413-EventThread] 
zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:30,731 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-15 18:15:30,735 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 18:15:30,737 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 18:15:30,737 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.157sec 2023-07-15 18:15:30,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-15 18:15:30,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:30,739 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-15 18:15:30,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-15 18:15:30,740 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:30,741 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:30,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-15 18:15:30,742 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:30,743 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4 empty. 
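Side note on the rsgroup bootstrap recorded above: once the RSGroupStartupWorker reports hbase:rsgroup online and GroupBasedLoadBalancer comes up, group membership can be read back through the hbase-rsgroup client module. A minimal Java sketch, assuming the branch-2.4 class names (RSGroupAdminClient, RSGroupInfo); the connection setup is illustrative and not taken from this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumes the hbase-rsgroup client classes from branch-2.4 are on the classpath.
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo info : groupAdmin.listRSGroups()) {
            // "default" is the built-in group behind the /hbase/rsgroup/default znode above.
            System.out.println(info.getName() + " servers=" + info.getServers()
                + " tables=" + info.getTables());
          }
        }
      }
    }
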
2023-07-15 18:15:30,743 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:30,743 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-15 18:15:30,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-15 18:15:30,747 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-15 18:15:30,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:30,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-15 18:15:30,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 18:15:30,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689444929330-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-15 18:15:30,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689444929330-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-15 18:15:30,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 18:15:30,756 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:30,757 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7485de1cb45abf1fe05520f5647ee2a4, NAME => 'hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 7485de1cb45abf1fe05520f5647ee2a4, disabling compactions & flushes 2023-07-15 18:15:30,768 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. after waiting 0 ms 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:30,768 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:30,768 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 7485de1cb45abf1fe05520f5647ee2a4: 2023-07-15 18:15:30,771 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:30,772 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689444930771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444930771"}]},"ts":"1689444930771"} 2023-07-15 18:15:30,773 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:30,773 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:30,774 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930773"}]},"ts":"1689444930773"} 2023-07-15 18:15:30,775 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-15 18:15:30,777 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:30,777 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:30,777 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:30,777 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:30,777 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:30,778 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=7485de1cb45abf1fe05520f5647ee2a4, ASSIGN}] 2023-07-15 18:15:30,778 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=7485de1cb45abf1fe05520f5647ee2a4, ASSIGN 2023-07-15 18:15:30,779 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=7485de1cb45abf1fe05520f5647ee2a4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42683,1689444929532; forceNewPlan=false, retain=false 2023-07-15 18:15:30,785 DEBUG [Listener at localhost/44413] zookeeper.ReadOnlyZKClient(139): Connect 0x73b46b3f to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:30,791 DEBUG [Listener at localhost/44413] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e4139d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:30,792 DEBUG [hconnection-0x5830b6d3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:30,794 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52000, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:30,795 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:30,795 INFO [Listener at localhost/44413] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:30,798 DEBUG [Listener at localhost/44413] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 18:15:30,799 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56052, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 18:15:30,802 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 18:15:30,802 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:30,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 18:15:30,803 DEBUG [Listener at localhost/44413] zookeeper.ReadOnlyZKClient(139): Connect 0x0ae5f21b to 127.0.0.1:57464 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:30,811 DEBUG [Listener at localhost/44413] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@573e4bba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:30,811 INFO [Listener at localhost/44413] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57464 2023-07-15 18:15:30,814 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:30,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016a325212000a connected 2023-07-15 18:15:30,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-15 18:15:30,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-15 18:15:30,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-15 18:15:30,837 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:30,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 19 msec 2023-07-15 18:15:30,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-15 18:15:30,929 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
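The creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} request logged above corresponds to a plain Admin.createNamespace call. A minimal sketch; the quota keys and values are copied from the log, the connection handling is assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNp1Sketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Quota keys/values copied from the request logged above: at most 5 regions
          // and 2 tables are allowed in the np1 namespace.
          NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .addConfiguration("hbase.namespace.quota.maxtables", "2")
              .build();
          admin.createNamespace(np1);   // drives the CreateNamespaceProcedure (pid=14) above
        }
      }
    }
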
2023-07-15 18:15:30,930 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7485de1cb45abf1fe05520f5647ee2a4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:30,931 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689444930930"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444930930"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444930930"}]},"ts":"1689444930930"} 2023-07-15 18:15:30,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 7485de1cb45abf1fe05520f5647ee2a4, server=jenkins-hbase4.apache.org,42683,1689444929532}] 2023-07-15 18:15:30,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:30,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-15 18:15:30,936 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:30,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-15 18:15:30,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:30,937 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:30,938 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:30,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:30,941 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:30,941 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 empty. 
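Likewise, the create 'np1:table1' request stored as pid=16 above is a single-family table creation in which every printed attribute is the column-family default. A minimal sketch of the equivalent Admin call, with the connection setup assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTable1Sketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One column family "fam1" with default settings, matching the descriptor logged above.
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build());
        }
      }
    }
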
2023-07-15 18:15:30,942 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:30,942 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-15 18:15:30,953 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:30,954 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 827651a9f83386ab0edbcbeb8537f147, NAME => 'np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 827651a9f83386ab0edbcbeb8537f147, disabling compactions & flushes 2023-07-15 18:15:30,967 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. after waiting 0 ms 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:30,967 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:30,967 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 827651a9f83386ab0edbcbeb8537f147: 2023-07-15 18:15:30,969 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:30,970 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444930970"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444930970"}]},"ts":"1689444930970"} 2023-07-15 18:15:30,971 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-15 18:15:30,972 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:30,972 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444930972"}]},"ts":"1689444930972"} 2023-07-15 18:15:30,973 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-15 18:15:30,976 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:30,976 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:30,977 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:30,977 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:30,977 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:30,977 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, ASSIGN}] 2023-07-15 18:15:30,978 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, ASSIGN 2023-07-15 18:15:30,978 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42523,1689444929417; forceNewPlan=false, retain=false 2023-07-15 18:15:31,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:31,088 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 
2023-07-15 18:15:31,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7485de1cb45abf1fe05520f5647ee2a4, NAME => 'hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:31,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:31,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,090 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,091 DEBUG [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/q 2023-07-15 18:15:31,091 DEBUG [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/q 2023-07-15 18:15:31,091 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7485de1cb45abf1fe05520f5647ee2a4 columnFamilyName q 2023-07-15 18:15:31,092 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] regionserver.HStore(310): Store=7485de1cb45abf1fe05520f5647ee2a4/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:31,092 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,094 DEBUG 
[StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/u 2023-07-15 18:15:31,094 DEBUG [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/u 2023-07-15 18:15:31,094 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7485de1cb45abf1fe05520f5647ee2a4 columnFamilyName u 2023-07-15 18:15:31,094 INFO [StoreOpener-7485de1cb45abf1fe05520f5647ee2a4-1] regionserver.HStore(310): Store=7485de1cb45abf1fe05520f5647ee2a4/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:31,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,096 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,097 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
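The FlushLargeStoresPolicy message above says that hbase.hregion.percolumnfamilyflush.size.lower.bound was not set in the hbase:quota table descriptor, so the region falls back to the memstore flush size divided by the number of families (128 MB across the q and u families gives the 64 MB shown, and flushSizeLowerBound=67108864 a few lines below). A table that wanted an explicit bound would carry it as a descriptor value; a hedged sketch, where the table name, family and 32 MB threshold are purely illustrative:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class FlushLowerBoundSketch {
      public static void main(String[] args) {
        // Illustrative only: "demo_table", family "f" and the 32 MB threshold are hypothetical;
        // the property key is the one named in the FlushLargeStoresPolicy log message above.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_table"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(32L * 1024 * 1024))
            .build();
        System.out.println(td);
      }
    }
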
2023-07-15 18:15:31,099 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:31,101 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:31,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7485de1cb45abf1fe05520f5647ee2a4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11766428800, jitterRate=0.09583407640457153}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-15 18:15:31,102 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7485de1cb45abf1fe05520f5647ee2a4: 2023-07-15 18:15:31,102 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4., pid=15, masterSystemTime=1689444931084 2023-07-15 18:15:31,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:31,104 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:31,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7485de1cb45abf1fe05520f5647ee2a4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:31,104 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689444931104"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444931104"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444931104"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444931104"}]},"ts":"1689444931104"} 2023-07-15 18:15:31,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-15 18:15:31,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 7485de1cb45abf1fe05520f5647ee2a4, server=jenkins-hbase4.apache.org,42683,1689444929532 in 174 msec 2023-07-15 18:15:31,109 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-15 18:15:31,109 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=7485de1cb45abf1fe05520f5647ee2a4, ASSIGN in 329 msec 2023-07-15 18:15:31,110 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:31,110 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444931110"}]},"ts":"1689444931110"} 2023-07-15 18:15:31,111 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-15 18:15:31,114 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:31,115 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 376 msec 2023-07-15 18:15:31,129 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-15 18:15:31,130 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=827651a9f83386ab0edbcbeb8537f147, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:31,130 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444931130"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444931130"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444931130"}]},"ts":"1689444931130"} 2023-07-15 18:15:31,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 827651a9f83386ab0edbcbeb8537f147, server=jenkins-hbase4.apache.org,42523,1689444929417}] 2023-07-15 18:15:31,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:31,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 
2023-07-15 18:15:31,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 827651a9f83386ab0edbcbeb8537f147, NAME => 'np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:31,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:31,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,290 INFO [StoreOpener-827651a9f83386ab0edbcbeb8537f147-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,291 DEBUG [StoreOpener-827651a9f83386ab0edbcbeb8537f147-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/fam1 2023-07-15 18:15:31,291 DEBUG [StoreOpener-827651a9f83386ab0edbcbeb8537f147-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/fam1 2023-07-15 18:15:31,292 INFO [StoreOpener-827651a9f83386ab0edbcbeb8537f147-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 827651a9f83386ab0edbcbeb8537f147 columnFamilyName fam1 2023-07-15 18:15:31,292 INFO [StoreOpener-827651a9f83386ab0edbcbeb8537f147-1] regionserver.HStore(310): Store=827651a9f83386ab0edbcbeb8537f147/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:31,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:31,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 827651a9f83386ab0edbcbeb8537f147; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11032773120, jitterRate=0.02750706672668457}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:31,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 827651a9f83386ab0edbcbeb8537f147: 2023-07-15 18:15:31,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147., pid=18, masterSystemTime=1689444931283 2023-07-15 18:15:31,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,301 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=827651a9f83386ab0edbcbeb8537f147, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:31,301 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444931301"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444931301"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444931301"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444931301"}]},"ts":"1689444931301"} 2023-07-15 18:15:31,305 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-15 18:15:31,305 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 827651a9f83386ab0edbcbeb8537f147, server=jenkins-hbase4.apache.org,42523,1689444929417 in 171 msec 2023-07-15 18:15:31,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-15 18:15:31,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, ASSIGN in 328 msec 2023-07-15 18:15:31,307 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:31,307 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444931307"}]},"ts":"1689444931307"} 2023-07-15 18:15:31,309 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-15 18:15:31,312 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:31,314 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 379 msec 2023-07-15 18:15:31,493 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-15 18:15:31,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:31,541 INFO [Listener at localhost/44413] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-15 18:15:31,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:31,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-15 18:15:31,547 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:31,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-15 18:15:31,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 18:15:31,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=26 msec 2023-07-15 18:15:31,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 18:15:31,651 INFO [Listener at localhost/44413] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-15 18:15:31,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:31,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:31,653 INFO [Listener at localhost/44413] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-15 18:15:31,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-15 18:15:31,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-15 18:15:31,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 18:15:31,657 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444931657"}]},"ts":"1689444931657"} 2023-07-15 18:15:31,658 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-15 18:15:31,659 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-15 18:15:31,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, UNASSIGN}] 2023-07-15 18:15:31,661 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, UNASSIGN 2023-07-15 18:15:31,661 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=827651a9f83386ab0edbcbeb8537f147, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:31,661 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444931661"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444931661"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444931661"}]},"ts":"1689444931661"} 2023-07-15 18:15:31,663 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 827651a9f83386ab0edbcbeb8537f147, server=jenkins-hbase4.apache.org,42523,1689444929417}] 2023-07-15 18:15:31,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 18:15:31,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 827651a9f83386ab0edbcbeb8537f147, 
disabling compactions & flushes 2023-07-15 18:15:31,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. after waiting 0 ms 2023-07-15 18:15:31,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:31,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147. 2023-07-15 18:15:31,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 827651a9f83386ab0edbcbeb8537f147: 2023-07-15 18:15:31,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,822 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=827651a9f83386ab0edbcbeb8537f147, regionState=CLOSED 2023-07-15 18:15:31,822 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444931822"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444931822"}]},"ts":"1689444931822"} 2023-07-15 18:15:31,825 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-15 18:15:31,825 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 827651a9f83386ab0edbcbeb8537f147, server=jenkins-hbase4.apache.org,42523,1689444929417 in 160 msec 2023-07-15 18:15:31,826 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-15 18:15:31,826 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=827651a9f83386ab0edbcbeb8537f147, UNASSIGN in 165 msec 2023-07-15 18:15:31,827 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444931827"}]},"ts":"1689444931827"} 2023-07-15 18:15:31,828 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-15 18:15:31,829 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-15 18:15:31,831 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 176 msec 2023-07-15 18:15:31,959 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 18:15:31,959 INFO [Listener at localhost/44413] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-15 18:15:31,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-15 18:15:31,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:31,962 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:31,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-15 18:15:31,963 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:31,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:31,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:31,967 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,968 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/fam1, FileablePath, hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/recovered.edits] 2023-07-15 18:15:31,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-15 18:15:31,983 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/recovered.edits/4.seqid to hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/archive/data/np1/table1/827651a9f83386ab0edbcbeb8537f147/recovered.edits/4.seqid 2023-07-15 18:15:31,984 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/.tmp/data/np1/table1/827651a9f83386ab0edbcbeb8537f147 2023-07-15 18:15:31,984 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-15 18:15:31,993 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:31,995 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-15 18:15:31,997 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-15 18:15:32,001 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:32,001 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-15 18:15:32,001 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444932001"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:32,003 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 18:15:32,003 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 827651a9f83386ab0edbcbeb8537f147, NAME => 'np1:table1,,1689444930932.827651a9f83386ab0edbcbeb8537f147.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 18:15:32,003 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-15 18:15:32,003 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444932003"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:32,005 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-15 18:15:32,009 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-15 18:15:32,011 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 49 msec 2023-07-15 18:15:32,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-15 18:15:32,070 INFO [Listener at localhost/44413] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-15 18:15:32,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-15 18:15:32,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,086 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,089 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,091 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-15 18:15:32,092 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/namespace/np1 2023-07-15 18:15:32,093 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:32,093 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,095 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-15 18:15:32,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-15 18:15:32,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-15 18:15:32,193 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 18:15:32,194 INFO [Listener at localhost/44413] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x73b46b3f to 127.0.0.1:57464 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] util.JVMClusterUtil(257): Found active master hash=626739741, stopped=false 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 18:15:32,194 DEBUG [Listener at localhost/44413] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-15 18:15:32,194 INFO [Listener at localhost/44413] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:32,196 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:32,196 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:32,196 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:32,196 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:32,196 DEBUG [Listener at 
localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:32,196 INFO [Listener at localhost/44413] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 18:15:32,198 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:32,198 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:32,198 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:32,198 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:32,198 DEBUG [Listener at localhost/44413] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39db3047 to 127.0.0.1:57464 2023-07-15 18:15:32,198 DEBUG [Listener at localhost/44413] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,198 INFO [Listener at localhost/44413] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42523,1689444929417' ***** 2023-07-15 18:15:32,198 INFO [Listener at localhost/44413] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:32,198 INFO [Listener at localhost/44413] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43891,1689444929490' ***** 2023-07-15 18:15:32,198 INFO [Listener at localhost/44413] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:32,198 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:32,199 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:32,199 INFO [Listener at localhost/44413] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42683,1689444929532' ***** 2023-07-15 18:15:32,206 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1064): Closing user regions 2023-07-15 18:15:32,206 INFO [Listener at localhost/44413] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:32,208 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(3305): Received CLOSE for 7485de1cb45abf1fe05520f5647ee2a4 2023-07-15 18:15:32,212 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(3305): Received CLOSE for fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:32,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7485de1cb45abf1fe05520f5647ee2a4, disabling compactions & flushes 2023-07-15 18:15:32,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 
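The stretch above — the ListRSGroupInfos request, DisableTableProcedure pid=20, DeleteTableProcedure pid=23 and DeleteNamespaceProcedure pid=24 — is the master-side trace of a client dropping np1:table1 and then its namespace through the standard Admin API; the repeated "Checking to see if procedure is done" lines are the client polling until each procedure completes. Below is a minimal client-side sketch of those calls, assuming nothing beyond the public org.apache.hadoop.hbase.client.Admin API; the class name and configuration setup are illustrative and not taken from the test source. (The ListRSGroupInfos request itself is typically issued through the RSGroupAdminClient in the hbase-rsgroup module and is not shown here.)

// Sketch only: the client-side Admin calls that would produce the
// DisableTableProcedure / DeleteTableProcedure / DeleteNamespaceProcedure
// entries logged above. Connection setup is assumed, not taken from the test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropNamespaceTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // assumed to point at the (mini) cluster
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("np1:table1");
      // "Started disable of np1:table1" / DisableTableProcedure pid=20:
      admin.disableTable(tn);      // blocks while the client polls the master for completion
      // "delete np1:table1" / DeleteTableProcedure pid=23:
      admin.deleteTable(tn);
      // "delete np1" / DeleteNamespaceProcedure pid=24 (namespace must be empty first):
      admin.deleteNamespace("np1");
    }
  }
}
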
2023-07-15 18:15:32,213 INFO [RS:0;jenkins-hbase4:42523] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@e86723d{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:32,213 INFO [RS:1;jenkins-hbase4:43891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5eb598c2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:32,213 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:32,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. after waiting 0 ms 2023-07-15 18:15:32,214 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:32,214 INFO [RS:0;jenkins-hbase4:42523] server.AbstractConnector(383): Stopped ServerConnector@32d4c09f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:32,214 INFO [RS:1;jenkins-hbase4:43891] server.AbstractConnector(383): Stopped ServerConnector@6b2894ff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:32,214 INFO [RS:0;jenkins-hbase4:42523] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:32,214 INFO [RS:1;jenkins-hbase4:43891] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:32,214 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:32,215 INFO [RS:0;jenkins-hbase4:42523] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fd921dc{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:32,215 INFO [RS:1;jenkins-hbase4:43891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ffc762d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:32,222 INFO [RS:0;jenkins-hbase4:42523] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3acbfa48{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:32,222 INFO [RS:1;jenkins-hbase4:43891] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@63baada7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:32,222 INFO [RS:2;jenkins-hbase4:42683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1322d8e2{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:32,223 INFO [RS:2;jenkins-hbase4:42683] server.AbstractConnector(383): Stopped ServerConnector@6f673757{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:32,223 INFO [RS:2;jenkins-hbase4:42683] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:32,223 INFO [RS:2;jenkins-hbase4:42683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64e8e413{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:32,223 INFO [RS:2;jenkins-hbase4:42683] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@fa12fc1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:32,223 INFO [RS:0;jenkins-hbase4:42523] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:32,223 INFO [RS:0;jenkins-hbase4:42523] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:32,223 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:32,223 INFO [RS:0;jenkins-hbase4:42523] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:32,224 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(3305): Received CLOSE for 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:32,224 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:32,224 INFO [RS:2;jenkins-hbase4:42683] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:32,224 INFO [RS:2;jenkins-hbase4:42683] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:32,224 INFO [RS:2;jenkins-hbase4:42683] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:32,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 79dfe318ca9b6da52ea91d794974bcfd, disabling compactions & flushes 2023-07-15 18:15:32,225 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(3307): Received CLOSE for the region: fb66e1f4e78f045b48bf11a66e12cb90, which we are already trying to CLOSE, but not completed yet 2023-07-15 18:15:32,225 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:32,224 DEBUG [RS:0;jenkins-hbase4:42523] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40bc7536 to 127.0.0.1:57464 2023-07-15 18:15:32,225 DEBUG [RS:0;jenkins-hbase4:42523] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,225 DEBUG [RS:2;jenkins-hbase4:42683] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2ca8e561 to 127.0.0.1:57464 2023-07-15 18:15:32,225 DEBUG [RS:2;jenkins-hbase4:42683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,225 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:32,224 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:32,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 
2023-07-15 18:15:32,226 INFO [RS:2;jenkins-hbase4:42683] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:32,225 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 18:15:32,225 INFO [RS:1;jenkins-hbase4:43891] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:32,227 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1478): Online Regions={79dfe318ca9b6da52ea91d794974bcfd=hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd.} 2023-07-15 18:15:32,227 INFO [RS:1;jenkins-hbase4:43891] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:32,227 INFO [RS:2;jenkins-hbase4:42683] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:32,227 DEBUG [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1504): Waiting on 79dfe318ca9b6da52ea91d794974bcfd 2023-07-15 18:15:32,226 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. after waiting 0 ms 2023-07-15 18:15:32,227 INFO [RS:2;jenkins-hbase4:42683] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:32,227 INFO [RS:1;jenkins-hbase4:43891] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:32,227 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:32,227 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:32,227 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 18:15:32,228 DEBUG [RS:1;jenkins-hbase4:43891] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5177f365 to 127.0.0.1:57464 2023-07-15 18:15:32,228 DEBUG [RS:1;jenkins-hbase4:43891] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,228 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43891,1689444929490; all regions closed. 2023-07-15 18:15:32,227 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:32,229 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:32,229 DEBUG [RS:1;jenkins-hbase4:43891] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-15 18:15:32,229 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-15 18:15:32,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/quota/7485de1cb45abf1fe05520f5647ee2a4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:32,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:32,230 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 79dfe318ca9b6da52ea91d794974bcfd 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-15 18:15:32,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:32,231 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1478): Online Regions={7485de1cb45abf1fe05520f5647ee2a4=hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4., fb66e1f4e78f045b48bf11a66e12cb90=hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90., 1588230740=hbase:meta,,1.1588230740} 2023-07-15 18:15:32,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:32,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:32,231 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1504): Waiting on 1588230740, 7485de1cb45abf1fe05520f5647ee2a4, fb66e1f4e78f045b48bf11a66e12cb90 2023-07-15 18:15:32,231 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-15 18:15:32,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:32,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7485de1cb45abf1fe05520f5647ee2a4: 2023-07-15 18:15:32,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689444930738.7485de1cb45abf1fe05520f5647ee2a4. 2023-07-15 18:15:32,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fb66e1f4e78f045b48bf11a66e12cb90, disabling compactions & flushes 2023-07-15 18:15:32,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:32,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:32,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. after waiting 0 ms 2023-07-15 18:15:32,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 
2023-07-15 18:15:32,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fb66e1f4e78f045b48bf11a66e12cb90 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-15 18:15:32,253 DEBUG [RS:1;jenkins-hbase4:43891] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs 2023-07-15 18:15:32,253 INFO [RS:1;jenkins-hbase4:43891] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43891%2C1689444929490:(num 1689444930008) 2023-07-15 18:15:32,253 DEBUG [RS:1;jenkins-hbase4:43891] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,253 INFO [RS:1;jenkins-hbase4:43891] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,254 INFO [RS:1;jenkins-hbase4:43891] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:32,254 INFO [RS:1;jenkins-hbase4:43891] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:32,254 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:32,254 INFO [RS:1;jenkins-hbase4:43891] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:32,254 INFO [RS:1;jenkins-hbase4:43891] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:32,255 INFO [RS:1;jenkins-hbase4:43891] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43891 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43891,1689444929490 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,260 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, 
quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,267 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43891,1689444929490] 2023-07-15 18:15:32,268 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43891,1689444929490; numProcessing=1 2023-07-15 18:15:32,269 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43891,1689444929490 already deleted, retry=false 2023-07-15 18:15:32,269 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43891,1689444929490 expired; onlineServers=2 2023-07-15 18:15:32,282 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,282 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,282 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/.tmp/m/a4041bea7c3b4b779b9f39d91e89b553 2023-07-15 18:15:32,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/.tmp/info/4b529bef606a4b7e8cc7b17bc1acd67f 2023-07-15 18:15:32,310 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/info/bebf44fbb13d44f2a86fcfeeca18f94a 2023-07-15 18:15:32,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b529bef606a4b7e8cc7b17bc1acd67f 2023-07-15 18:15:32,314 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/.tmp/info/4b529bef606a4b7e8cc7b17bc1acd67f as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/info/4b529bef606a4b7e8cc7b17bc1acd67f 2023-07-15 18:15:32,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4b529bef606a4b7e8cc7b17bc1acd67f 2023-07-15 18:15:32,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/info/4b529bef606a4b7e8cc7b17bc1acd67f, entries=3, sequenceid=8, filesize=5.0 K 2023-07-15 18:15:32,325 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for fb66e1f4e78f045b48bf11a66e12cb90 in 89ms, sequenceid=8, compaction requested=false 2023-07-15 18:15:32,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-15 18:15:32,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bebf44fbb13d44f2a86fcfeeca18f94a 2023-07-15 18:15:32,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/.tmp/m/a4041bea7c3b4b779b9f39d91e89b553 as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/m/a4041bea7c3b4b779b9f39d91e89b553 2023-07-15 18:15:32,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/m/a4041bea7c3b4b779b9f39d91e89b553, entries=1, sequenceid=7, filesize=4.9 K 2023-07-15 18:15:32,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 79dfe318ca9b6da52ea91d794974bcfd in 106ms, sequenceid=7, compaction requested=false 2023-07-15 18:15:32,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-15 18:15:32,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/rsgroup/79dfe318ca9b6da52ea91d794974bcfd/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-15 18:15:32,358 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:32,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:32,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 79dfe318ca9b6da52ea91d794974bcfd: 2023-07-15 18:15:32,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689444930387.79dfe318ca9b6da52ea91d794974bcfd. 2023-07-15 18:15:32,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/namespace/fb66e1f4e78f045b48bf11a66e12cb90/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-15 18:15:32,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 
2023-07-15 18:15:32,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fb66e1f4e78f045b48bf11a66e12cb90: 2023-07-15 18:15:32,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689444930303.fb66e1f4e78f045b48bf11a66e12cb90. 2023-07-15 18:15:32,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/rep_barrier/b25dce9b945a4b5bbcb550952f612d3a 2023-07-15 18:15:32,376 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b25dce9b945a4b5bbcb550952f612d3a 2023-07-15 18:15:32,400 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:32,400 INFO [RS:1;jenkins-hbase4:43891] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43891,1689444929490; zookeeper connection closed. 2023-07-15 18:15:32,400 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:43891-0x1016a3252120002, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:32,402 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2659a347] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2659a347 2023-07-15 18:15:32,427 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42523,1689444929417; all regions closed. 2023-07-15 18:15:32,427 DEBUG [RS:0;jenkins-hbase4:42523] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-15 18:15:32,431 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-15 18:15:32,433 DEBUG [RS:0;jenkins-hbase4:42523] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs 2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42523%2C1689444929417:(num 1689444930023) 2023-07-15 18:15:32,434 DEBUG [RS:0;jenkins-hbase4:42523] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:32,434 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-15 18:15:32,434 INFO [RS:0;jenkins-hbase4:42523] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:32,435 INFO [RS:0;jenkins-hbase4:42523] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42523 2023-07-15 18:15:32,439 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:32,439 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,439 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42523,1689444929417 2023-07-15 18:15:32,440 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42523,1689444929417] 2023-07-15 18:15:32,440 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42523,1689444929417; numProcessing=2 2023-07-15 18:15:32,443 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42523,1689444929417 already deleted, retry=false 2023-07-15 18:15:32,443 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42523,1689444929417 expired; onlineServers=1 2023-07-15 18:15:32,632 DEBUG [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-15 18:15:32,790 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/table/4d99297e53e5419b909a6f0a17188288 2023-07-15 18:15:32,797 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d99297e53e5419b909a6f0a17188288 2023-07-15 18:15:32,797 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/info/bebf44fbb13d44f2a86fcfeeca18f94a as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/info/bebf44fbb13d44f2a86fcfeeca18f94a 2023-07-15 18:15:32,802 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bebf44fbb13d44f2a86fcfeeca18f94a 2023-07-15 18:15:32,803 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/info/bebf44fbb13d44f2a86fcfeeca18f94a, entries=32, sequenceid=31, filesize=8.5 K 2023-07-15 18:15:32,804 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): 
Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/rep_barrier/b25dce9b945a4b5bbcb550952f612d3a as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/rep_barrier/b25dce9b945a4b5bbcb550952f612d3a 2023-07-15 18:15:32,810 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b25dce9b945a4b5bbcb550952f612d3a 2023-07-15 18:15:32,810 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/rep_barrier/b25dce9b945a4b5bbcb550952f612d3a, entries=1, sequenceid=31, filesize=4.9 K 2023-07-15 18:15:32,811 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/.tmp/table/4d99297e53e5419b909a6f0a17188288 as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/table/4d99297e53e5419b909a6f0a17188288 2023-07-15 18:15:32,816 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4d99297e53e5419b909a6f0a17188288 2023-07-15 18:15:32,816 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/table/4d99297e53e5419b909a6f0a17188288, entries=8, sequenceid=31, filesize=5.2 K 2023-07-15 18:15:32,817 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 585ms, sequenceid=31, compaction requested=false 2023-07-15 18:15:32,817 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-15 18:15:32,825 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-15 18:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:32,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:32,827 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:32,832 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42683,1689444929532; all regions closed. 2023-07-15 18:15:32,832 DEBUG [RS:2;jenkins-hbase4:42683] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-15 18:15:32,837 DEBUG [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs 2023-07-15 18:15:32,837 INFO [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42683%2C1689444929532.meta:.meta(num 1689444930217) 2023-07-15 18:15:32,843 DEBUG [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/oldWALs 2023-07-15 18:15:32,844 INFO [RS:2;jenkins-hbase4:42683] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42683%2C1689444929532:(num 1689444930026) 2023-07-15 18:15:32,844 DEBUG [RS:2;jenkins-hbase4:42683] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,844 INFO [RS:2;jenkins-hbase4:42683] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:32,844 INFO [RS:2;jenkins-hbase4:42683] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:32,844 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:32,845 INFO [RS:2;jenkins-hbase4:42683] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42683 2023-07-15 18:15:32,849 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42683,1689444929532 2023-07-15 18:15:32,849 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:32,851 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42683,1689444929532] 2023-07-15 18:15:32,851 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42683,1689444929532; numProcessing=3 2023-07-15 18:15:32,852 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42683,1689444929532 already deleted, retry=false 2023-07-15 18:15:32,852 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42683,1689444929532 expired; onlineServers=0 2023-07-15 18:15:32,852 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44131,1689444929330' ***** 2023-07-15 18:15:32,852 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 18:15:32,853 DEBUG [M:0;jenkins-hbase4:44131] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2164ca6e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:32,853 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:32,855 INFO 
[M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@d183284{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:32,855 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:32,855 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:32,856 INFO [M:0;jenkins-hbase4:44131] server.AbstractConnector(383): Stopped ServerConnector@298b6695{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:32,856 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:32,856 INFO [M:0;jenkins-hbase4:44131] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:32,856 INFO [M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@584176ac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:32,856 INFO [M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f26d369{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:32,857 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44131,1689444929330 2023-07-15 18:15:32,857 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44131,1689444929330; all regions closed. 2023-07-15 18:15:32,857 DEBUG [M:0;jenkins-hbase4:44131] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:32,857 INFO [M:0;jenkins-hbase4:44131] master.HMaster(1491): Stopping master jetty server 2023-07-15 18:15:32,858 INFO [M:0;jenkins-hbase4:44131] server.AbstractConnector(383): Stopped ServerConnector@7ced8210{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:32,858 DEBUG [M:0;jenkins-hbase4:44131] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 18:15:32,858 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 18:15:32,858 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444929758] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444929758,5,FailOnTimeoutGroup] 2023-07-15 18:15:32,858 DEBUG [M:0;jenkins-hbase4:44131] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 18:15:32,860 INFO [M:0;jenkins-hbase4:44131] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 18:15:32,860 INFO [M:0;jenkins-hbase4:44131] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-15 18:15:32,858 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444929757] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444929757,5,FailOnTimeoutGroup] 2023-07-15 18:15:32,860 INFO [M:0;jenkins-hbase4:44131] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:32,860 DEBUG [M:0;jenkins-hbase4:44131] master.HMaster(1512): Stopping service threads 2023-07-15 18:15:32,861 INFO [M:0;jenkins-hbase4:44131] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 18:15:32,861 ERROR [M:0;jenkins-hbase4:44131] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-15 18:15:32,861 INFO [M:0;jenkins-hbase4:44131] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 18:15:32,861 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-15 18:15:32,862 DEBUG [M:0;jenkins-hbase4:44131] zookeeper.ZKUtil(398): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 18:15:32,862 WARN [M:0;jenkins-hbase4:44131] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 18:15:32,862 INFO [M:0;jenkins-hbase4:44131] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 18:15:32,863 INFO [M:0;jenkins-hbase4:44131] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 18:15:32,863 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:32,863 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:32,863 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:32,863 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:32,863 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
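The lines above walk the close protocol for the master's local store region: disable compactions and flushes, wait for the close lock, then disable updates before flushing. Below is a simplified sketch of that close-lock pattern in plain Java; the class and method names are invented for illustration and this is not HBase's HRegion code.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical illustration of the "waiting for close lock ... updates disabled" sequence
// seen in the log; HRegion's real implementation is considerably more involved.
class ClosableRegionSketch {
  private final ReentrantReadWriteLock closeLock = new ReentrantReadWriteLock();
  private volatile boolean writesEnabled = true;

  /** Writers take the read side of the lock so they can proceed concurrently. */
  void put(byte[] row, byte[] value) {
    closeLock.readLock().lock();
    try {
      if (!writesEnabled) {
        throw new IllegalStateException("Updates disabled, region is closing");
      }
      // ... apply the edit to the memstore ...
    } finally {
      closeLock.readLock().unlock();
    }
  }

  /** Close takes the write side: once acquired, no new edits can start. */
  void close() {
    long start = System.currentTimeMillis();
    closeLock.writeLock().lock();   // "Waiting without time limit for close lock"
    try {
      writesEnabled = false;        // "Updates disabled for region"
      System.out.println("Acquired close lock after "
          + (System.currentTimeMillis() - start) + " ms");
      // ... flush remaining memstore contents, then release resources ...
    } finally {
      closeLock.writeLock().unlock();
    }
  }
}
```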
2023-07-15 18:15:32,863 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.13 KB 2023-07-15 18:15:32,878 INFO [M:0;jenkins-hbase4:44131] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/225385726a094df197ec0a20b5dd8955 2023-07-15 18:15:32,884 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/225385726a094df197ec0a20b5dd8955 as hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/225385726a094df197ec0a20b5dd8955 2023-07-15 18:15:32,890 INFO [M:0;jenkins-hbase4:44131] regionserver.HStore(1080): Added hdfs://localhost:33611/user/jenkins/test-data/ceb82792-1554-4442-554e-d0686ca2ac06/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/225385726a094df197ec0a20b5dd8955, entries=24, sequenceid=194, filesize=12.4 K 2023-07-15 18:15:32,895 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95222, heapSize ~109.12 KB/111736, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=194, compaction requested=false 2023-07-15 18:15:32,903 INFO [RS:0;jenkins-hbase4:42523] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42523,1689444929417; zookeeper connection closed. 2023-07-15 18:15:32,903 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:32,903 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42523-0x1016a3252120001, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:32,928 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7edb47f5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7edb47f5 2023-07-15 18:15:32,931 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:32,931 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:32,936 INFO [M:0;jenkins-hbase4:44131] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-15 18:15:32,937 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 
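The flush above writes the memstore to a file under .tmp and then "commits" it by moving it into the store directory. A minimal sketch of that write-to-temp-then-rename pattern with the Hadoop FileSystem API; the paths are shortened stand-ins for the ones in the log, and this is not HBase's HRegionFileSystem code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Write-then-rename commit, mirroring the ".tmp/proc/<file>" -> "proc/<file>" move in the log.
public class FlushCommitSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:33611");   // NameNode address from the log
    FileSystem fs = FileSystem.get(conf);

    Path tmp = new Path("/user/jenkins/store/.tmp/proc/flushfile");
    Path dst = new Path("/user/jenkins/store/proc/flushfile");

    // 1. Flush data to a temporary location first.
    try (FSDataOutputStream out = fs.create(tmp, true)) {
      out.writeBytes("flushed-memstore-contents");
    }

    // 2. Commit by renaming into the final store directory (a cheap metadata move on HDFS).
    fs.mkdirs(dst.getParent());
    if (!fs.rename(tmp, dst)) {
      throw new java.io.IOException("Failed to commit " + tmp + " as " + dst);
    }
    System.out.println("Committed " + dst + ", length=" + fs.getFileStatus(dst).getLen());
  }
}
```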
2023-07-15 18:15:32,937 INFO [M:0;jenkins-hbase4:44131] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44131 2023-07-15 18:15:32,939 DEBUG [M:0;jenkins-hbase4:44131] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44131,1689444929330 already deleted, retry=false 2023-07-15 18:15:33,004 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:33,004 INFO [RS:2;jenkins-hbase4:42683] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42683,1689444929532; zookeeper connection closed. 2023-07-15 18:15:33,004 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): regionserver:42683-0x1016a3252120003, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:33,004 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@287d6221] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@287d6221 2023-07-15 18:15:33,004 INFO [Listener at localhost/44413] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-15 18:15:33,104 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:33,104 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016a3252120000, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:33,104 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44131,1689444929330; zookeeper connection closed. 
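At this point the log reports the complete shutdown of 1 master and 3 region servers, after which the test utility tears down DFS/ZK and brings a fresh minicluster back up. A sketch of that teardown/restart cycle as a test might drive it through HBaseTestingUtility; the option values mirror the StartMiniClusterOption printed in the log, but the surrounding driver class is hypothetical and the exact API is version dependent (branch-2.x style assumed).

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Hypothetical driver for the shutdown/restart sequence visible in the log.
public class MiniClusterCycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)          // matches StartMiniClusterOption{numMasters=1, ...} above
        .numRegionServers(3)
        .numDataNodes(3)
        .build();

    util.startMiniCluster(option);   // DFS + ZK + HBase, as in the startup messages
    try {
      // ... run assertions against the running cluster here ...
    } finally {
      util.shutdownMiniCluster();    // produces the "Minicluster is down" message
    }
  }
}
```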
2023-07-15 18:15:33,105 WARN [Listener at localhost/44413] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:33,113 INFO [Listener at localhost/44413] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:33,218 WARN [BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:33,219 WARN [BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576242968-172.31.14.131-1689444928453 (Datanode Uuid 9df18cd0-931b-45d4-92eb-4d414f73bd61) service to localhost/127.0.0.1:33611 2023-07-15 18:15:33,220 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data5/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,220 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data6/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,224 WARN [Listener at localhost/44413] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:33,239 INFO [Listener at localhost/44413] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:33,344 WARN [BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:33,344 WARN [BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576242968-172.31.14.131-1689444928453 (Datanode Uuid b3ab3c04-0734-4e7e-8cb7-9f28283b0006) service to localhost/127.0.0.1:33611 2023-07-15 18:15:33,344 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data3/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,345 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data4/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,346 WARN [Listener at localhost/44413] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:33,359 INFO [Listener at localhost/44413] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:33,463 WARN 
[BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:33,463 WARN [BP-1576242968-172.31.14.131-1689444928453 heartbeating to localhost/127.0.0.1:33611] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1576242968-172.31.14.131-1689444928453 (Datanode Uuid 2fa56dba-debc-4f4f-83f7-904135d525cc) service to localhost/127.0.0.1:33611 2023-07-15 18:15:33,464 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data1/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,465 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/cluster_e88f6413-6507-b8fa-07bf-45305d97c755/dfs/data/data2/current/BP-1576242968-172.31.14.131-1689444928453] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:33,479 INFO [Listener at localhost/44413] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:33,602 INFO [Listener at localhost/44413] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-15 18:15:33,637 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.log.dir so I do NOT create it in target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/d590d3d5-443c-cb27-4bc3-fadfe35f1e07/hadoop.tmp.dir so I do NOT create it in target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9, deleteOnExit=true 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/test.cache.data in system properties and HBase conf 2023-07-15 18:15:33,638 INFO [Listener at 
localhost/44413] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.tmp.dir in system properties and HBase conf 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir in system properties and HBase conf 2023-07-15 18:15:33,638 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-15 18:15:33,639 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-15 18:15:33,639 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-15 18:15:33,639 DEBUG [Listener at localhost/44413] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-15 18:15:33,639 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:15:33,639 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-15 18:15:33,640 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-15 18:15:33,640 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:15:33,640 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-15 18:15:33,640 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-15 18:15:33,640 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/nfs.dump.dir in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/java.io.tmpdir in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-15 18:15:33,641 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-15 18:15:33,642 INFO [Listener at localhost/44413] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-15 18:15:33,646 WARN [Listener at localhost/44413] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:15:33,647 WARN [Listener at localhost/44413] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:15:33,695 WARN [Listener at localhost/44413] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:33,697 DEBUG [Listener at localhost/44413-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016a325212000a, quorum=127.0.0.1:57464, baseZNode=/hbase Received ZooKeeper Event, 
type=None, state=Disconnected, path=null 2023-07-15 18:15:33,697 INFO [Listener at localhost/44413] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:33,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016a325212000a, quorum=127.0.0.1:57464, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-15 18:15:33,707 INFO [Listener at localhost/44413] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/java.io.tmpdir/Jetty_localhost_42787_hdfs____vqjxsy/webapp 2023-07-15 18:15:33,814 INFO [Listener at localhost/44413] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42787 2023-07-15 18:15:33,819 WARN [Listener at localhost/44413] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-15 18:15:33,819 WARN [Listener at localhost/44413] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-15 18:15:33,860 WARN [Listener at localhost/46849] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:33,873 WARN [Listener at localhost/46849] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:33,875 WARN [Listener at localhost/46849] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:33,876 INFO [Listener at localhost/46849] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:33,881 INFO [Listener at localhost/46849] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/java.io.tmpdir/Jetty_localhost_42299_datanode____.1rskud/webapp 2023-07-15 18:15:33,975 INFO [Listener at localhost/46849] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42299 2023-07-15 18:15:33,984 WARN [Listener at localhost/35699] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:34,003 WARN [Listener at localhost/35699] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:34,005 WARN [Listener at localhost/35699] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:34,006 INFO [Listener at localhost/35699] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:34,011 INFO [Listener at localhost/35699] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/java.io.tmpdir/Jetty_localhost_44917_datanode____jd6zgm/webapp 2023-07-15 18:15:34,093 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xedee98d3503d69a: Processing 
first storage report for DS-c647eab0-693d-4c96-93bd-80faad671768 from datanode bb38fc4c-88be-4f8d-af8e-f7d4ec02f1c4 2023-07-15 18:15:34,093 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xedee98d3503d69a: from storage DS-c647eab0-693d-4c96-93bd-80faad671768 node DatanodeRegistration(127.0.0.1:39567, datanodeUuid=bb38fc4c-88be-4f8d-af8e-f7d4ec02f1c4, infoPort=33533, infoSecurePort=0, ipcPort=35699, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,094 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xedee98d3503d69a: Processing first storage report for DS-3e2276b7-5718-4688-b4e5-c90362c355a7 from datanode bb38fc4c-88be-4f8d-af8e-f7d4ec02f1c4 2023-07-15 18:15:34,094 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xedee98d3503d69a: from storage DS-3e2276b7-5718-4688-b4e5-c90362c355a7 node DatanodeRegistration(127.0.0.1:39567, datanodeUuid=bb38fc4c-88be-4f8d-af8e-f7d4ec02f1c4, infoPort=33533, infoSecurePort=0, ipcPort=35699, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,128 INFO [Listener at localhost/35699] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44917 2023-07-15 18:15:34,150 WARN [Listener at localhost/41859] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:34,170 WARN [Listener at localhost/41859] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-15 18:15:34,174 WARN [Listener at localhost/41859] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-15 18:15:34,176 INFO [Listener at localhost/41859] log.Slf4jLog(67): jetty-6.1.26 2023-07-15 18:15:34,186 INFO [Listener at localhost/41859] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/java.io.tmpdir/Jetty_localhost_44615_datanode____pb04ld/webapp 2023-07-15 18:15:34,274 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b490b64ed8782f0: Processing first storage report for DS-fc713ac8-587a-420a-9dde-77f8992a4597 from datanode f6bad8fd-a126-4473-a4ab-68567571c96b 2023-07-15 18:15:34,274 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b490b64ed8782f0: from storage DS-fc713ac8-587a-420a-9dde-77f8992a4597 node DatanodeRegistration(127.0.0.1:41525, datanodeUuid=f6bad8fd-a126-4473-a4ab-68567571c96b, infoPort=34961, infoSecurePort=0, ipcPort=41859, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,274 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b490b64ed8782f0: Processing first storage report for DS-5e145bdc-8e9f-4610-abf7-b97fbe1bfc08 from datanode f6bad8fd-a126-4473-a4ab-68567571c96b 2023-07-15 18:15:34,274 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b490b64ed8782f0: from storage DS-5e145bdc-8e9f-4610-abf7-b97fbe1bfc08 node DatanodeRegistration(127.0.0.1:41525, datanodeUuid=f6bad8fd-a126-4473-a4ab-68567571c96b, infoPort=34961, infoSecurePort=0, ipcPort=41859, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,304 INFO [Listener at localhost/41859] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44615 2023-07-15 18:15:34,313 WARN [Listener at localhost/32839] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-15 18:15:34,418 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeb3bedb4b5e0b68a: Processing first storage report for DS-435c7217-e6c9-4bf2-894f-b1e58d08c111 from datanode 6e3252db-c583-4276-9093-20737de8da0e 2023-07-15 18:15:34,418 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeb3bedb4b5e0b68a: from storage DS-435c7217-e6c9-4bf2-894f-b1e58d08c111 node DatanodeRegistration(127.0.0.1:46563, datanodeUuid=6e3252db-c583-4276-9093-20737de8da0e, infoPort=38369, infoSecurePort=0, ipcPort=32839, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,419 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeb3bedb4b5e0b68a: Processing first storage report for DS-cb7a3643-7598-4bea-a16a-56af793d32dc from datanode 6e3252db-c583-4276-9093-20737de8da0e 2023-07-15 18:15:34,419 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeb3bedb4b5e0b68a: from storage DS-cb7a3643-7598-4bea-a16a-56af793d32dc node DatanodeRegistration(127.0.0.1:46563, datanodeUuid=6e3252db-c583-4276-9093-20737de8da0e, infoPort=38369, infoSecurePort=0, ipcPort=32839, storageInfo=lv=-57;cid=testClusterID;nsid=431530288;c=1689444933649), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-15 18:15:34,421 DEBUG [Listener at localhost/32839] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b 2023-07-15 18:15:34,425 INFO [Listener at localhost/32839] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/zookeeper_0, clientPort=63689, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-15 18:15:34,426 INFO [Listener at localhost/32839] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63689 2023-07-15 18:15:34,426 
INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,427 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,443 INFO [Listener at localhost/32839] util.FSUtils(471): Created version file at hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 with version=8 2023-07-15 18:15:34,443 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44585/user/jenkins/test-data/3580e102-7075-f2b3-a69c-e8179e4f7955/hbase-staging 2023-07-15 18:15:34,444 DEBUG [Listener at localhost/32839] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-15 18:15:34,444 DEBUG [Listener at localhost/32839] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-15 18:15:34,444 DEBUG [Listener at localhost/32839] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-15 18:15:34,444 DEBUG [Listener at localhost/32839] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-15 18:15:34,445 INFO [Listener at localhost/32839] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:34,445 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,445 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,445 INFO [Listener at localhost/32839] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:34,445 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,446 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:34,446 INFO [Listener at localhost/32839] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:34,446 INFO [Listener at localhost/32839] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40787 2023-07-15 18:15:34,447 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,447 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,448 INFO [Listener at 
localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40787 connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:34,457 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:407870x0, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:34,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40787-0x1016a32661a0000 connected 2023-07-15 18:15:34,471 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:34,472 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:34,472 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:34,473 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40787 2023-07-15 18:15:34,473 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40787 2023-07-15 18:15:34,473 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40787 2023-07-15 18:15:34,473 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40787 2023-07-15 18:15:34,473 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40787 2023-07-15 18:15:34,475 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:34,475 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:34,476 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:34,476 INFO [Listener at localhost/32839] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-15 18:15:34,476 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:34,476 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:34,476 INFO [Listener at localhost/32839] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
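The master has now re-bound its RPC server and registered ZooKeeper watches against the new ensemble at 127.0.0.1:63689. A small sketch of how a client (for example the test's verifying admin) would reach this cluster: point the configuration at the test quorum and client port, then open a Connection. The quorum and port values come from the log; everything else is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Minimal client-side sketch: connect to the freshly started test cluster via its ZK ensemble.
public class TestClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 63689);  // MiniZooKeeperCluster port above

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // The connection resolves the active master and meta through ZooKeeper,
      // the same znodes (/hbase/master, /hbase/running) the watchers above were set on.
      System.out.println("Cluster id: " + admin.getClusterMetrics().getClusterId());
    }
  }
}
```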
2023-07-15 18:15:34,477 INFO [Listener at localhost/32839] http.HttpServer(1146): Jetty bound to port 38525 2023-07-15 18:15:34,477 INFO [Listener at localhost/32839] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:34,478 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,478 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32c1d2f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:34,478 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,478 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@745c340f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:34,484 INFO [Listener at localhost/32839] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:34,484 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:34,484 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:34,485 INFO [Listener at localhost/32839] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:34,486 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,487 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1c3f7275{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:34,488 INFO [Listener at localhost/32839] server.AbstractConnector(333): Started ServerConnector@4f43c59f{HTTP/1.1, (http/1.1)}{0.0.0.0:38525} 2023-07-15 18:15:34,488 INFO [Listener at localhost/32839] server.Server(415): Started @40284ms 2023-07-15 18:15:34,488 INFO [Listener at localhost/32839] master.HMaster(444): hbase.rootdir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615, hbase.cluster.distributed=false 2023-07-15 18:15:34,501 INFO [Listener at localhost/32839] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:34,501 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,501 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,501 INFO [Listener at localhost/32839] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 
18:15:34,501 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,502 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:34,502 INFO [Listener at localhost/32839] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:34,502 INFO [Listener at localhost/32839] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38289 2023-07-15 18:15:34,503 INFO [Listener at localhost/32839] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:34,504 DEBUG [Listener at localhost/32839] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:34,504 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,505 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,506 INFO [Listener at localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38289 connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:34,510 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:382890x0, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:34,511 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38289-0x1016a32661a0001 connected 2023-07-15 18:15:34,511 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:34,512 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:34,513 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:34,514 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38289 2023-07-15 18:15:34,514 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38289 2023-07-15 18:15:34,515 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38289 2023-07-15 18:15:34,519 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38289 2023-07-15 18:15:34,519 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38289 2023-07-15 18:15:34,520 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:34,521 INFO [Listener at localhost/32839] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:34,522 INFO [Listener at localhost/32839] http.HttpServer(1146): Jetty bound to port 42303 2023-07-15 18:15:34,522 INFO [Listener at localhost/32839] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:34,524 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,524 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@260d4e64{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:34,524 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,525 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@383351f3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:34,530 INFO [Listener at localhost/32839] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:34,531 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:34,531 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:34,531 INFO [Listener at localhost/32839] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:34,538 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,539 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@43ec58c0{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:34,540 INFO [Listener at localhost/32839] server.AbstractConnector(333): Started ServerConnector@452ba432{HTTP/1.1, (http/1.1)}{0.0.0.0:42303} 2023-07-15 18:15:34,540 INFO [Listener at localhost/32839] server.Server(415): Started @40336ms 2023-07-15 18:15:34,565 INFO [Listener at localhost/32839] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:34,566 INFO [Listener at localhost/32839] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:34,577 INFO [Listener at localhost/32839] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32819 2023-07-15 18:15:34,577 INFO [Listener at localhost/32839] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:34,595 DEBUG [Listener at localhost/32839] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:34,596 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,598 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,599 INFO [Listener at localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32819 connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:34,608 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:328190x0, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:34,610 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:328190x0, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:34,611 
DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:328190x0, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:34,613 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:328190x0, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:34,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32819-0x1016a32661a0002 connected 2023-07-15 18:15:34,664 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32819 2023-07-15 18:15:34,674 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32819 2023-07-15 18:15:34,697 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32819 2023-07-15 18:15:34,728 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32819 2023-07-15 18:15:34,728 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32819 2023-07-15 18:15:34,730 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:34,730 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:34,731 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:34,731 INFO [Listener at localhost/32839] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:34,731 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:34,731 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:34,731 INFO [Listener at localhost/32839] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
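The region server startup lines above show the RPC executors being instantiated with small handler counts and the web UIs binding to ephemeral ports, both of which the test utility controls through ordinary configuration keys. A hedged sketch of the corresponding settings; these keys exist in HBase, but the values are just examples chosen to match what the log reports, not a statement of what this test actually sets.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Configuration knobs behind the startup lines above (handler counts, random info ports).
public class RsStartupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Few RPC handlers for a test cluster; the log shows handlerCount=3 per executor.
    conf.setInt("hbase.regionserver.handler.count", 3);

    // Bind the master/region server web UIs to ephemeral ports, as the
    // "Setting ... InfoServer Port to random" messages indicate.
    conf.setInt("hbase.master.info.port", 0);
    conf.setInt("hbase.regionserver.info.port", 0);

    System.out.println("handler.count = "
        + conf.getInt("hbase.regionserver.handler.count", -1));
  }
}
```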
2023-07-15 18:15:34,732 INFO [Listener at localhost/32839] http.HttpServer(1146): Jetty bound to port 36791 2023-07-15 18:15:34,732 INFO [Listener at localhost/32839] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:34,739 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,739 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@272d5fe5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:34,739 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,740 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7dc6a6e9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:34,745 INFO [Listener at localhost/32839] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:34,746 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:34,746 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:34,746 INFO [Listener at localhost/32839] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:34,747 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,748 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@341f1006{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:34,751 INFO [Listener at localhost/32839] server.AbstractConnector(333): Started ServerConnector@3654aba1{HTTP/1.1, (http/1.1)}{0.0.0.0:36791} 2023-07-15 18:15:34,751 INFO [Listener at localhost/32839] server.Server(415): Started @40546ms 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:34,763 INFO [Listener at localhost/32839] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:34,764 INFO [Listener at localhost/32839] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45011 2023-07-15 18:15:34,764 INFO [Listener at localhost/32839] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:34,769 DEBUG [Listener at localhost/32839] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:34,769 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,771 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:34,772 INFO [Listener at localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45011 connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:34,778 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:450110x0, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:34,779 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:450110x0, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:34,780 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45011-0x1016a32661a0003 connected 2023-07-15 18:15:34,780 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:34,781 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:34,785 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45011 2023-07-15 18:15:34,786 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45011 2023-07-15 18:15:34,786 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45011 2023-07-15 18:15:34,791 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45011 2023-07-15 18:15:34,791 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45011 2023-07-15 18:15:34,793 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:34,793 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:34,793 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:34,794 INFO [Listener at localhost/32839] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:34,794 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:34,794 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:34,794 INFO [Listener at localhost/32839] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-15 18:15:34,795 INFO [Listener at localhost/32839] http.HttpServer(1146): Jetty bound to port 43759 2023-07-15 18:15:34,795 INFO [Listener at localhost/32839] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:34,804 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,804 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2aa6aee7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:34,805 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,805 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@54037bb0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:34,811 INFO [Listener at localhost/32839] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:34,812 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:34,812 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:34,812 INFO [Listener at localhost/32839] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-15 18:15:34,814 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:34,815 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2947125d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:34,817 INFO [Listener at localhost/32839] server.AbstractConnector(333): Started ServerConnector@6dcc53d9{HTTP/1.1, (http/1.1)}{0.0.0.0:43759} 2023-07-15 18:15:34,817 INFO [Listener at localhost/32839] server.Server(415): Started @40613ms 2023-07-15 18:15:34,820 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:34,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@373c44fd{HTTP/1.1, (http/1.1)}{0.0.0.0:41747} 2023-07-15 18:15:34,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @40625ms 2023-07-15 18:15:34,830 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:34,832 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:34,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:34,834 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:34,834 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:34,834 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:34,836 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:34,834 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-15 18:15:34,837 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:34,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40787,1689444934445 from backup master directory 2023-07-15 
18:15:34,839 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:34,841 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:34,841 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:34,841 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-15 18:15:34,841 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:34,993 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/hbase.id with ID: 33a87105-64a0-4b73-9ffc-ef142eee8c56 2023-07-15 18:15:35,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:35,029 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3f9a9fba to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:35,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@131b19a9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:35,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:35,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-15 18:15:35,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:35,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store-tmp 2023-07-15 18:15:35,103 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:35,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:35,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:35,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:35,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:35,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
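[editor's note] The 'master:store' table descriptor printed above (a single 'proc' family with VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536') is created internally by the master, not through the Admin API. Purely as an illustration of how the same settings map onto the public HBase 2.x descriptor builders, here is a small sketch that rebuilds an equivalent descriptor; the class name is hypothetical and the values are copied from the log line above.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setMaxVersions(1)                 // VERSIONS => '1'
                .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
                .setBlocksize(65536)               // BLOCKSIZE => '65536'
                .setInMemory(false)                // IN_MEMORY => 'false'
                .build())
            .build();
        System.out.println(td);
      }
    }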
2023-07-15 18:15:35,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:35,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/WALs/jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:35,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40787%2C1689444934445, suffix=, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/WALs/jenkins-hbase4.apache.org,40787,1689444934445, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/oldWALs, maxLogs=10 2023-07-15 18:15:35,126 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:35,126 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:35,126 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:35,137 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/WALs/jenkins-hbase4.apache.org,40787,1689444934445/jenkins-hbase4.apache.org%2C40787%2C1689444934445.1689444935107 2023-07-15 18:15:35,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK], DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK], DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK]] 2023-07-15 18:15:35,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:35,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,138 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,139 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,140 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-15 18:15:35,141 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-15 18:15:35,141 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-15 18:15:35,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:35,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11170520960, jitterRate=0.04033583402633667}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:35,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:35,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-15 18:15:35,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-15 18:15:35,149 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-15 18:15:35,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-15 18:15:35,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-15 18:15:35,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-15 18:15:35,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-15 18:15:35,150 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-15 18:15:35,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-15 18:15:35,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-15 18:15:35,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-15 18:15:35,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-15 18:15:35,155 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-15 18:15:35,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-15 18:15:35,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-15 18:15:35,160 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:35,160 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:35,160 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-15 18:15:35,162 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:35,162 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,163 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40787,1689444934445, sessionid=0x1016a32661a0000, setting cluster-up flag (Was=false) 2023-07-15 18:15:35,169 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,174 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-15 18:15:35,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:35,178 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-15 18:15:35,185 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:35,186 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.hbase-snapshot/.tmp 2023-07-15 18:15:35,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-15 18:15:35,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-15 18:15:35,187 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-15 18:15:35,188 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:35,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
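[editor's note] The AbstractFSWAL "WAL configuration: blocksize=256 MB, rollsize=128 MB, ... maxLogs=10" line earlier in this block describes the master's local-region WAL, which is created with the AsyncFSWALProvider also named in the log. As a rough sketch only: for ordinary region-server WALs the analogous numbers are driven by the standard keys below (rollsize being blocksize times the roll multiplier). The key names are assumptions based on stock HBase configuration, not values echoed by this log, and the master's local store may use its own overrides.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                     // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L << 20); // 256 MB blocks
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);  // roll at ~128 MB
        conf.setInt("hbase.regionserver.maxlogs", 10);
        System.out.println(conf.get("hbase.wal.provider"));
      }
    }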
2023-07-15 18:15:35,189 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:35,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:35,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 18:15:35,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-15 18:15:35,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:35,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689444965204 2023-07-15 18:15:35,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-15 18:15:35,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-15 18:15:35,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-15 18:15:35,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-15 18:15:35,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-15 18:15:35,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-15 18:15:35,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,205 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:35,205 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-15 18:15:35,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-15 18:15:35,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-15 18:15:35,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-15 18:15:35,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-15 18:15:35,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-15 18:15:35,207 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444935207,5,FailOnTimeoutGroup] 2023-07-15 18:15:35,207 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444935207,5,FailOnTimeoutGroup] 2023-07-15 18:15:35,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
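[editor's note] The StochasticLoadBalancer line above reports maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000, which are the balancer's stock defaults. A brief sketch of the corresponding tuning keys follows; the key names are the usual hbase.master.balancer.stochastic.* settings and are assumed rather than quoted from this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setInt("hbase.master.balancer.stochastic.maxRunningTime", 30_000);
        System.out.println(conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
      }
    }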
2023-07-15 18:15:35,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,207 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:35,221 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(951): ClusterId : 33a87105-64a0-4b73-9ffc-ef142eee8c56 2023-07-15 18:15:35,221 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:35,221 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(951): ClusterId : 33a87105-64a0-4b73-9ffc-ef142eee8c56 2023-07-15 18:15:35,221 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:35,222 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(951): ClusterId : 33a87105-64a0-4b73-9ffc-ef142eee8c56 2023-07-15 18:15:35,222 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:35,223 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:35,224 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:35,224 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', 
COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 2023-07-15 18:15:35,225 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:35,225 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:35,226 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:35,226 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:35,226 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:35,226 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:35,229 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:35,232 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ReadOnlyZKClient(139): Connect 0x102d08ba to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:35,241 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:35,242 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:35,245 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ReadOnlyZKClient(139): Connect 0x0bcf76f8 to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:35,245 DEBUG [RS:2;jenkins-hbase4:45011] zookeeper.ReadOnlyZKClient(139): Connect 0x1c3a3843 to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:35,257 DEBUG [RS:0;jenkins-hbase4:38289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1eeaf4cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:35,257 DEBUG [RS:0;jenkins-hbase4:38289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ad563ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:35,260 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:35,264 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/info 2023-07-15 18:15:35,264 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:35,265 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,265 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:35,266 DEBUG [RS:2;jenkins-hbase4:45011] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5938b5a0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:35,266 DEBUG [RS:2;jenkins-hbase4:45011] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5de72e7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:35,267 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:35,268 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:35,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:35,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/table 2023-07-15 18:15:35,276 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:35,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,279 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740 2023-07-15 18:15:35,279 DEBUG [RS:1;jenkins-hbase4:32819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ff0f359, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:35,279 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740 2023-07-15 18:15:35,279 DEBUG [RS:1;jenkins-hbase4:32819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4af27cc3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:35,280 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45011 2023-07-15 18:15:35,280 INFO [RS:2;jenkins-hbase4:45011] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:35,280 INFO [RS:2;jenkins-hbase4:45011] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:35,280 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1022): About to register with Master. 
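[editor's note] Each CompactionConfiguration line above prints the per-store compaction parameters for a column family of hbase:meta: minCompactSize of 128 MB, between 3 and 10 files per compaction, and a ratio of 1.2. For reference, a minimal sketch of the standard keys that feed those numbers; the key names are assumptions from stock HBase configuration, while the values mirror what the log reports.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L << 20); // minCompactSize
        conf.setInt("hbase.hstore.compaction.min", 3);                // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);               // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        System.out.println(conf.getFloat("hbase.hstore.compaction.ratio", -1f));
      }
    }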
2023-07-15 18:15:35,281 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40787,1689444934445 with isa=jenkins-hbase4.apache.org/172.31.14.131:45011, startcode=1689444934762 2023-07-15 18:15:35,281 DEBUG [RS:2;jenkins-hbase4:45011] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:35,283 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-15 18:15:35,283 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34221, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:35,285 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40787] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,286 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-15 18:15:35,286 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-15 18:15:35,287 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 2023-07-15 18:15:35,287 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46849 2023-07-15 18:15:35,287 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38525 2023-07-15 18:15:35,287 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:35,287 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38289 2023-07-15 18:15:35,287 INFO [RS:0;jenkins-hbase4:38289] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:35,287 INFO [RS:0;jenkins-hbase4:38289] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:35,287 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-15 18:15:35,288 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:35,288 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40787,1689444934445 with isa=jenkins-hbase4.apache.org/172.31.14.131:38289, startcode=1689444934501 2023-07-15 18:15:35,289 DEBUG [RS:0;jenkins-hbase4:38289] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:35,289 DEBUG [RS:2;jenkins-hbase4:45011] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,289 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45011,1689444934762] 2023-07-15 18:15:35,289 WARN [RS:2;jenkins-hbase4:45011] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:35,289 INFO [RS:2;jenkins-hbase4:45011] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:35,290 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,290 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55219, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:35,291 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40787] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,291 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 18:15:35,291 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-15 18:15:35,291 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 2023-07-15 18:15:35,291 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46849 2023-07-15 18:15:35,291 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38525 2023-07-15 18:15:35,292 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:32819 2023-07-15 18:15:35,292 INFO [RS:1;jenkins-hbase4:32819] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:35,292 INFO [RS:1;jenkins-hbase4:32819] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:35,292 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:35,293 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:35,295 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11576936000, jitterRate=0.07818618416786194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:35,295 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:35,295 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:35,295 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:35,295 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:35,295 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:35,295 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:35,296 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:35,296 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:35,300 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-15 18:15:35,301 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-15 18:15:35,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-15 18:15:35,301 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40787,1689444934445 with 
isa=jenkins-hbase4.apache.org/172.31.14.131:32819, startcode=1689444934565 2023-07-15 18:15:35,301 DEBUG [RS:1;jenkins-hbase4:32819] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:35,302 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:35,303 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,303 WARN [RS:0;jenkins-hbase4:38289] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-15 18:15:35,303 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50989, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:35,303 INFO [RS:0;jenkins-hbase4:38289] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:35,303 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-15 18:15:35,303 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,303 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40787] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,304 DEBUG [RS:2;jenkins-hbase4:45011] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-15 18:15:35,305 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-15 18:15:35,305 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-15 18:15:35,305 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 2023-07-15 18:15:35,305 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46849 2023-07-15 18:15:35,305 DEBUG [RS:2;jenkins-hbase4:45011] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,305 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38525 2023-07-15 18:15:35,307 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38289,1689444934501] 2023-07-15 18:15:35,309 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:35,309 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:35,309 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,309 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32819,1689444934565] 2023-07-15 18:15:35,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,309 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:35,309 WARN [RS:1;jenkins-hbase4:32819] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 18:15:35,310 INFO [RS:2;jenkins-hbase4:45011] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:35,310 INFO [RS:1;jenkins-hbase4:32819] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:35,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,310 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,311 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,311 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,312 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,313 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:35,313 INFO [RS:0;jenkins-hbase4:38289] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:35,318 INFO [RS:2;jenkins-hbase4:45011] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:35,322 INFO [RS:0;jenkins-hbase4:38289] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:35,322 INFO [RS:2;jenkins-hbase4:45011] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:35,322 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,322 INFO [RS:0;jenkins-hbase4:38289] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:35,322 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:35,323 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:35,324 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,324 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,324 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,325 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:35,325 INFO [RS:1;jenkins-hbase4:32819] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:35,331 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:35,331 INFO [RS:1;jenkins-hbase4:32819] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:35,332 INFO [RS:1;jenkins-hbase4:32819] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:35,332 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,332 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:35,332 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,332 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:35,333 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,333 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,333 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,333 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,334 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:35,334 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:35,334 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,334 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 
18:15:35,334 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:35,335 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:2;jenkins-hbase4:45011] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:0;jenkins-hbase4:38289] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,335 DEBUG [RS:1;jenkins-hbase4:32819] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:35,344 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,344 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,344 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,344 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,344 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:35,344 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,357 INFO [RS:2;jenkins-hbase4:45011] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:35,357 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45011,1689444934762-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,358 INFO [RS:1;jenkins-hbase4:32819] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:35,358 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32819,1689444934565-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,366 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,367 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,367 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,383 INFO [RS:2;jenkins-hbase4:45011] regionserver.Replication(203): jenkins-hbase4.apache.org,45011,1689444934762 started 2023-07-15 18:15:35,383 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45011,1689444934762, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45011, sessionid=0x1016a32661a0003 2023-07-15 18:15:35,383 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:35,383 DEBUG [RS:2;jenkins-hbase4:45011] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,383 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45011,1689444934762' 2023-07-15 18:15:35,383 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:35,384 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:35,384 INFO [RS:0;jenkins-hbase4:38289] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:35,384 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:35,384 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:35,384 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38289,1689444934501-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:35,384 DEBUG [RS:2;jenkins-hbase4:45011] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,386 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45011,1689444934762' 2023-07-15 18:15:35,386 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:35,386 DEBUG [RS:2;jenkins-hbase4:45011] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:35,387 DEBUG [RS:2;jenkins-hbase4:45011] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:35,387 INFO [RS:2;jenkins-hbase4:45011] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:35,387 INFO [RS:2;jenkins-hbase4:45011] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 18:15:35,389 INFO [RS:1;jenkins-hbase4:32819] regionserver.Replication(203): jenkins-hbase4.apache.org,32819,1689444934565 started 2023-07-15 18:15:35,389 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32819,1689444934565, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32819, sessionid=0x1016a32661a0002 2023-07-15 18:15:35,389 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:35,389 DEBUG [RS:1;jenkins-hbase4:32819] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,389 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32819,1689444934565' 2023-07-15 18:15:35,390 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:35,390 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:35,390 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:35,390 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:35,391 DEBUG [RS:1;jenkins-hbase4:32819] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,391 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32819,1689444934565' 2023-07-15 18:15:35,391 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:35,391 DEBUG [RS:1;jenkins-hbase4:32819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:35,392 DEBUG [RS:1;jenkins-hbase4:32819] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:35,392 INFO [RS:1;jenkins-hbase4:32819] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 
18:15:35,392 INFO [RS:1;jenkins-hbase4:32819] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-15 18:15:35,402 INFO [RS:0;jenkins-hbase4:38289] regionserver.Replication(203): jenkins-hbase4.apache.org,38289,1689444934501 started 2023-07-15 18:15:35,403 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38289,1689444934501, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38289, sessionid=0x1016a32661a0001 2023-07-15 18:15:35,403 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:35,403 DEBUG [RS:0;jenkins-hbase4:38289] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,403 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38289,1689444934501' 2023-07-15 18:15:35,403 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:35,403 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38289,1689444934501' 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:35,404 DEBUG [RS:0;jenkins-hbase4:38289] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:35,404 INFO [RS:0;jenkins-hbase4:38289] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:35,404 INFO [RS:0;jenkins-hbase4:38289] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 18:15:35,455 DEBUG [jenkins-hbase4:40787] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-15 18:15:35,456 DEBUG [jenkins-hbase4:40787] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:35,456 DEBUG [jenkins-hbase4:40787] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:35,456 DEBUG [jenkins-hbase4:40787] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:35,456 DEBUG [jenkins-hbase4:40787] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:35,456 DEBUG [jenkins-hbase4:40787] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:35,457 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38289,1689444934501, state=OPENING 2023-07-15 18:15:35,458 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-15 18:15:35,460 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:35,460 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:35,460 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38289,1689444934501}] 2023-07-15 18:15:35,489 INFO [RS:2;jenkins-hbase4:45011] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45011%2C1689444934762, suffix=, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,45011,1689444934762, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs, maxLogs=32 2023-07-15 18:15:35,493 INFO [RS:1;jenkins-hbase4:32819] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32819%2C1689444934565, suffix=, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,32819,1689444934565, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs, maxLogs=32 2023-07-15 18:15:35,494 WARN [ReadOnlyZKClient-127.0.0.1:63689@0x3f9a9fba] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-15 18:15:35,494 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:35,496 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58018, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:35,496 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38289] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58018 deadline: 1689444995496, 
exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,508 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:35,508 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:35,509 INFO [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38289%2C1689444934501, suffix=, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,38289,1689444934501, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs, maxLogs=32 2023-07-15 18:15:35,509 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:35,514 INFO [RS:2;jenkins-hbase4:45011] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,45011,1689444934762/jenkins-hbase4.apache.org%2C45011%2C1689444934762.1689444935489 2023-07-15 18:15:35,514 DEBUG [RS:2;jenkins-hbase4:45011] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK], DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK], DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK]] 2023-07-15 18:15:35,518 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:35,518 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:35,518 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:35,522 INFO [RS:1;jenkins-hbase4:32819] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,32819,1689444934565/jenkins-hbase4.apache.org%2C32819%2C1689444934565.1689444935494 2023-07-15 18:15:35,523 DEBUG [RS:1;jenkins-hbase4:32819] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK], DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK]] 2023-07-15 18:15:35,526 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:35,526 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:35,526 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:35,529 INFO [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,38289,1689444934501/jenkins-hbase4.apache.org%2C38289%2C1689444934501.1689444935509 2023-07-15 18:15:35,529 DEBUG [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK], DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK], DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK]] 2023-07-15 18:15:35,615 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:35,616 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:35,618 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58024, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:35,621 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-15 18:15:35,621 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:35,623 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38289%2C1689444934501.meta, suffix=.meta, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,38289,1689444934501, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs, maxLogs=32 2023-07-15 18:15:35,637 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:35,637 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:35,638 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:35,642 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,38289,1689444934501/jenkins-hbase4.apache.org%2C38289%2C1689444934501.meta.1689444935623.meta 2023-07-15 18:15:35,642 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK], DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK], DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK]] 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-15 18:15:35,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-15 18:15:35,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-15 18:15:35,645 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-15 18:15:35,646 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/info 2023-07-15 18:15:35,646 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/info 2023-07-15 18:15:35,646 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-15 18:15:35,647 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,647 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-15 18:15:35,648 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:35,648 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/rep_barrier 2023-07-15 18:15:35,649 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-15 18:15:35,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,649 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-15 18:15:35,650 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/table 2023-07-15 18:15:35,650 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/table 2023-07-15 18:15:35,650 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-15 18:15:35,651 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:35,652 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740 2023-07-15 18:15:35,653 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740 2023-07-15 18:15:35,655 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-15 18:15:35,657 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-15 18:15:35,658 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10464651840, jitterRate=-0.025403350591659546}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-15 18:15:35,658 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-15 18:15:35,662 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689444935615 2023-07-15 18:15:35,668 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-15 18:15:35,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-15 18:15:35,669 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38289,1689444934501, state=OPEN 2023-07-15 18:15:35,671 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-15 18:15:35,672 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-15 18:15:35,673 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-15 18:15:35,673 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38289,1689444934501 in 211 msec 2023-07-15 18:15:35,675 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-15 18:15:35,675 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 372 msec 2023-07-15 18:15:35,677 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 487 msec 2023-07-15 18:15:35,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689444935677, completionTime=-1 2023-07-15 18:15:35,677 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-15 18:15:35,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-15 18:15:35,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-15 18:15:35,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689444995682 2023-07-15 18:15:35,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689445055682 2023-07-15 18:15:35,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-15 18:15:35,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40787,1689444934445-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40787,1689444934445-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40787,1689444934445-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40787, period=300000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:35,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-15 18:15:35,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:35,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-15 18:15:35,693 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-15 18:15:35,693 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:35,694 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:35,696 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:35,696 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242 empty. 2023-07-15 18:15:35,697 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:35,697 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-15 18:15:35,709 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:35,710 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 041cc93b165c8cbb6d01c8a8caefe242, NAME => 'hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp 2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 041cc93b165c8cbb6d01c8a8caefe242, disabling compactions & flushes 2023-07-15 18:15:35,720 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 
2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. after waiting 0 ms 2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:35,720 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:35,720 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 041cc93b165c8cbb6d01c8a8caefe242: 2023-07-15 18:15:35,722 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:35,723 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444935723"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444935723"}]},"ts":"1689444935723"} 2023-07-15 18:15:35,725 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:35,726 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:35,726 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444935726"}]},"ts":"1689444935726"} 2023-07-15 18:15:35,727 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-15 18:15:35,731 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:35,731 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:35,731 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:35,731 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:35,731 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:35,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=041cc93b165c8cbb6d01c8a8caefe242, ASSIGN}] 2023-07-15 18:15:35,733 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=041cc93b165c8cbb6d01c8a8caefe242, ASSIGN 2023-07-15 18:15:35,734 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=041cc93b165c8cbb6d01c8a8caefe242, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45011,1689444934762; forceNewPlan=false, retain=false 2023-07-15 18:15:35,800 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:35,802 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-15 18:15:35,804 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:35,804 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:35,806 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:35,806 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a empty. 
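
[Editor's note] The hbase:rsgroup descriptor above pins the MultiRowMutationEndpoint coprocessor and disables region splits via SPLIT_POLICY metadata. A user table can be declared the same way; the fragment below is an assumed illustration (hypothetical table name, reusing the Admin handle from the previous sketch):

    // Fragment, not the test's code; 'admin' comes from the earlier sketch.
    static void createRsGroupLikeTable(Admin admin) throws java.io.IOException {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo_rsgroup_like"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
              .setMaxVersions(1)                                   // VERSIONS => '1'
              .build())
          // coprocessor$1 => '|...MultiRowMutationEndpoint|536870911|'
          .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
          // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
          .build());
    }
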
2023-07-15 18:15:35,807 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:35,807 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-15 18:15:35,817 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:35,819 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 79d04ed49523ba28c3f52d06fb1d144a, NAME => 'hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp 2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 79d04ed49523ba28c3f52d06fb1d144a, disabling compactions & flushes 2023-07-15 18:15:35,829 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. after waiting 0 ms 2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:35,829 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 
2023-07-15 18:15:35,829 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 79d04ed49523ba28c3f52d06fb1d144a: 2023-07-15 18:15:35,832 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:35,832 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444935832"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444935832"}]},"ts":"1689444935832"} 2023-07-15 18:15:35,834 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-15 18:15:35,834 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:35,835 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444935834"}]},"ts":"1689444935834"} 2023-07-15 18:15:35,836 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-15 18:15:35,838 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:35,838 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:35,838 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:35,839 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:35,839 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:35,839 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79d04ed49523ba28c3f52d06fb1d144a, ASSIGN}] 2023-07-15 18:15:35,840 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=79d04ed49523ba28c3f52d06fb1d144a, ASSIGN 2023-07-15 18:15:35,840 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=79d04ed49523ba28c3f52d06fb1d144a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32819,1689444934565; forceNewPlan=false, retain=false 2023-07-15 18:15:35,840 INFO [jenkins-hbase4:40787] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
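
[Editor's note] The ASSIGN subprocedures above pick a target server per region (forceNewPlan=false, retain=false) and record the OPENING/OPEN transitions in hbase:meta. If one wanted to observe the resulting placement from a client, a RegionLocator read is one straightforward way; this is an assumed illustration reusing the Connection from the first sketch:

    // Fragment; needs org.apache.hadoop.hbase.HRegionLocation and
    // org.apache.hadoop.hbase.client.RegionLocator in addition to the earlier imports.
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // e.g. 041cc93b165c8cbb6d01c8a8caefe242 -> jenkins-hbase4.apache.org,45011,...
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
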
2023-07-15 18:15:35,842 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=041cc93b165c8cbb6d01c8a8caefe242, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,842 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444935842"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444935842"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444935842"}]},"ts":"1689444935842"} 2023-07-15 18:15:35,843 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79d04ed49523ba28c3f52d06fb1d144a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,843 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444935843"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444935843"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444935843"}]},"ts":"1689444935843"} 2023-07-15 18:15:35,843 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 041cc93b165c8cbb6d01c8a8caefe242, server=jenkins-hbase4.apache.org,45011,1689444934762}] 2023-07-15 18:15:35,844 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 79d04ed49523ba28c3f52d06fb1d144a, server=jenkins-hbase4.apache.org,32819,1689444934565}] 2023-07-15 18:15:35,997 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:35,997 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:35,997 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:35,997 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-15 18:15:35,999 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40338, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:35,999 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37464, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-15 18:15:36,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 041cc93b165c8cbb6d01c8a8caefe242, NAME => 'hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:36,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 
2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 79d04ed49523ba28c3f52d06fb1d144a, NAME => 'hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. service=MultiRowMutationService 2023-07-15 18:15:36,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,012 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
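
[Editor's note] The open sequence above loads MultiRowMutationEndpoint from the table descriptor (HTD) of hbase:rsgroup on the region server. From the client side, the coprocessors a table declares can be read back off its descriptor; illustrative fragment reusing the Admin handle from the earlier sketch:

    // Fragment; TableDescriptor is org.apache.hadoop.hbase.client.TableDescriptor.
    TableDescriptor td = admin.getDescriptor(TableName.valueOf("hbase:rsgroup"));
    td.getCoprocessorDescriptors().forEach(cp ->
        System.out.println("coprocessor: " + cp.getClassName()));
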
2023-07-15 18:15:36,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:36,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,013 INFO [StoreOpener-041cc93b165c8cbb6d01c8a8caefe242-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,013 INFO [StoreOpener-79d04ed49523ba28c3f52d06fb1d144a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,014 DEBUG [StoreOpener-041cc93b165c8cbb6d01c8a8caefe242-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/info 2023-07-15 18:15:36,014 DEBUG [StoreOpener-041cc93b165c8cbb6d01c8a8caefe242-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/info 2023-07-15 18:15:36,014 DEBUG [StoreOpener-79d04ed49523ba28c3f52d06fb1d144a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/m 2023-07-15 18:15:36,014 DEBUG [StoreOpener-79d04ed49523ba28c3f52d06fb1d144a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/m 2023-07-15 18:15:36,015 INFO [StoreOpener-79d04ed49523ba28c3f52d06fb1d144a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 79d04ed49523ba28c3f52d06fb1d144a columnFamilyName m 2023-07-15 18:15:36,015 INFO 
[StoreOpener-041cc93b165c8cbb6d01c8a8caefe242-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 041cc93b165c8cbb6d01c8a8caefe242 columnFamilyName info 2023-07-15 18:15:36,015 INFO [StoreOpener-79d04ed49523ba28c3f52d06fb1d144a-1] regionserver.HStore(310): Store=79d04ed49523ba28c3f52d06fb1d144a/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:36,015 INFO [StoreOpener-041cc93b165c8cbb6d01c8a8caefe242-1] regionserver.HStore(310): Store=041cc93b165c8cbb6d01c8a8caefe242/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:36,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,016 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,019 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:36,020 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:36,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:36,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:36,022 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 041cc93b165c8cbb6d01c8a8caefe242; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10883785760, jitterRate=0.013631537556648254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:36,022 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 79d04ed49523ba28c3f52d06fb1d144a; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1c9071a4, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:36,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 041cc93b165c8cbb6d01c8a8caefe242: 2023-07-15 18:15:36,022 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 79d04ed49523ba28c3f52d06fb1d144a: 2023-07-15 18:15:36,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a., pid=9, masterSystemTime=1689444935997 2023-07-15 18:15:36,023 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242., pid=8, masterSystemTime=1689444935997 2023-07-15 18:15:36,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:36,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:36,028 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=041cc93b165c8cbb6d01c8a8caefe242, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:36,028 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689444936028"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444936028"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444936028"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444936028"}]},"ts":"1689444936028"} 2023-07-15 18:15:36,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:36,029 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 
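
[Editor's note] The CompactionConfiguration lines above echo the store-level compaction settings in effect (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, minCompactSize 128 MB). In a deployment these map to hbase-site.xml / Configuration keys; a minimal sketch of setting them programmatically, values mirroring the log rather than recommending anything:

    // Fragment; Configuration is org.apache.hadoop.conf.Configuration.
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);         // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);        // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);  // ratio
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
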
2023-07-15 18:15:36,037 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=79d04ed49523ba28c3f52d06fb1d144a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,037 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689444936037"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444936037"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444936037"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444936037"}]},"ts":"1689444936037"} 2023-07-15 18:15:36,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-15 18:15:36,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 041cc93b165c8cbb6d01c8a8caefe242, server=jenkins-hbase4.apache.org,45011,1689444934762 in 187 msec 2023-07-15 18:15:36,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-15 18:15:36,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=041cc93b165c8cbb6d01c8a8caefe242, ASSIGN in 307 msec 2023-07-15 18:15:36,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-15 18:15:36,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 79d04ed49523ba28c3f52d06fb1d144a, server=jenkins-hbase4.apache.org,32819,1689444934565 in 195 msec 2023-07-15 18:15:36,042 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:36,042 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444936042"}]},"ts":"1689444936042"} 2023-07-15 18:15:36,044 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-15 18:15:36,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-15 18:15:36,044 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=79d04ed49523ba28c3f52d06fb1d144a, ASSIGN in 203 msec 2023-07-15 18:15:36,044 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:36,044 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444936044"}]},"ts":"1689444936044"} 2023-07-15 18:15:36,046 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-15 18:15:36,046 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:36,047 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 355 msec 2023-07-15 18:15:36,048 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:36,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 249 msec 2023-07-15 18:15:36,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-15 18:15:36,094 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:36,095 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:36,097 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:36,099 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37478, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:36,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-15 18:15:36,105 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:36,107 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:36,109 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-15 18:15:36,109 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
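
[Editor's note] Besides the system tables, the master bootstraps the 'default' and 'hbase' namespaces through CreateNamespaceProcedure (pids 10 and 11 below). Application namespaces go through the same machinery when created from a client; fragment reusing the Admin from the first sketch, with a made-up namespace name:

    // Fragment; NamespaceDescriptor is org.apache.hadoop.hbase.NamespaceDescriptor.
    admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
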
2023-07-15 18:15:36,115 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:36,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 16 msec 2023-07-15 18:15:36,118 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:36,118 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,119 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:36,120 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-15 18:15:36,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-15 18:15:36,129 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:36,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-07-15 18:15:36,147 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-15 18:15:36,150 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.309sec 2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40787,1689444934445-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
2023-07-15 18:15:36,150 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40787,1689444934445-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-15 18:15:36,151 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-15 18:15:36,221 DEBUG [Listener at localhost/32839] zookeeper.ReadOnlyZKClient(139): Connect 0x0ac83e46 to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:36,227 DEBUG [Listener at localhost/32839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@789ee5b5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:36,229 DEBUG [hconnection-0x69dad501-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:36,230 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58038, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:36,232 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:36,232 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:36,234 DEBUG [Listener at localhost/32839] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-15 18:15:36,235 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47360, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-15 18:15:36,239 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-15 18:15:36,239 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:36,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-15 18:15:36,240 DEBUG [Listener at localhost/32839] zookeeper.ReadOnlyZKClient(139): Connect 0x6ad59b44 to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:36,250 DEBUG [Listener at localhost/32839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@551839c2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:36,251 INFO [Listener at localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:36,254 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63689, 
baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:36,255 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016a32661a000a connected 2023-07-15 18:15:36,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,260 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-15 18:15:36,272 INFO [Listener at localhost/32839] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-15 18:15:36,273 INFO [Listener at localhost/32839] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42585 2023-07-15 18:15:36,273 INFO [Listener at localhost/32839] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-15 18:15:36,276 DEBUG [Listener at localhost/32839] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-15 18:15:36,276 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:36,277 INFO [Listener at localhost/32839] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-15 18:15:36,278 INFO [Listener at localhost/32839] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42585 connecting to ZooKeeper ensemble=127.0.0.1:63689 2023-07-15 18:15:36,282 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:425850x0, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-15 18:15:36,284 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42585-0x1016a32661a000b connected 2023-07-15 18:15:36,284 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-15 18:15:36,285 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-15 18:15:36,285 DEBUG [Listener at localhost/32839] zookeeper.ZKUtil(164): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-15 18:15:36,286 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42585 2023-07-15 18:15:36,286 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42585 2023-07-15 18:15:36,286 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42585 2023-07-15 18:15:36,286 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42585 2023-07-15 18:15:36,289 DEBUG [Listener at localhost/32839] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42585 2023-07-15 18:15:36,291 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-15 18:15:36,291 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-15 18:15:36,291 INFO [Listener at localhost/32839] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-15 18:15:36,291 INFO [Listener at localhost/32839] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-15 18:15:36,292 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-15 18:15:36,292 INFO [Listener at localhost/32839] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-15 18:15:36,292 INFO [Listener at localhost/32839] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
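
[Editor's note] Just before the extra region server comes up, the test turns the balancer off ("set balanceSwitch=false" above). From client code this is a single Admin call; the snippet below is an illustration of that call, not the test's own wording:

    // Fragment; balancerSwitch(onOrOff, synchronous) returns the previous state.
    boolean previous = admin.balancerSwitch(false, true);
    System.out.println("balancer was previously " + (previous ? "on" : "off"));
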
2023-07-15 18:15:36,292 INFO [Listener at localhost/32839] http.HttpServer(1146): Jetty bound to port 42295 2023-07-15 18:15:36,292 INFO [Listener at localhost/32839] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-15 18:15:36,297 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:36,298 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@320b0f5c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,AVAILABLE} 2023-07-15 18:15:36,298 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:36,298 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39ee291c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-15 18:15:36,304 INFO [Listener at localhost/32839] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-15 18:15:36,304 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-15 18:15:36,305 INFO [Listener at localhost/32839] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-15 18:15:36,305 INFO [Listener at localhost/32839] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-15 18:15:36,306 INFO [Listener at localhost/32839] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-15 18:15:36,307 INFO [Listener at localhost/32839] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@283b6a7e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:36,308 INFO [Listener at localhost/32839] server.AbstractConnector(333): Started ServerConnector@6969afd9{HTTP/1.1, (http/1.1)}{0.0.0.0:42295} 2023-07-15 18:15:36,308 INFO [Listener at localhost/32839] server.Server(415): Started @42104ms 2023-07-15 18:15:36,311 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(951): ClusterId : 33a87105-64a0-4b73-9ffc-ef142eee8c56 2023-07-15 18:15:36,311 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-15 18:15:36,312 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-15 18:15:36,313 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-15 18:15:36,315 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-15 18:15:36,317 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ReadOnlyZKClient(139): Connect 0x75c266bb to 127.0.0.1:63689 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-15 18:15:36,322 DEBUG [RS:3;jenkins-hbase4:42585] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b059673, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-15 18:15:36,322 DEBUG [RS:3;jenkins-hbase4:42585] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7f3a0dcb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:36,330 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42585 2023-07-15 18:15:36,331 INFO [RS:3;jenkins-hbase4:42585] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-15 18:15:36,331 INFO [RS:3;jenkins-hbase4:42585] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-15 18:15:36,331 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1022): About to register with Master. 2023-07-15 18:15:36,331 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40787,1689444934445 with isa=jenkins-hbase4.apache.org/172.31.14.131:42585, startcode=1689444936271 2023-07-15 18:15:36,331 DEBUG [RS:3;jenkins-hbase4:42585] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-15 18:15:36,333 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41261, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-15 18:15:36,333 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40787] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,334 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
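
[Editor's note] The RS:3 entries running through this part of the log ("Restoring servers: 1" onward) are the full startup sequence of a fourth region server being added to the mini cluster. In test code this usually amounts to a single call on the already-running mini cluster; an assumed sketch using HBaseTestingUtility/MiniHBaseCluster, inside a test method that throws Exception:

    // Assumed sketch: 'util' is the HBaseTestingUtility whose mini cluster is already up.
    // JVMClusterUtil is org.apache.hadoop.hbase.util.JVMClusterUtil.
    JVMClusterUtil.RegionServerThread rs = util.getMiniHBaseCluster().startRegionServer();
    rs.waitForServerOnline();   // block until the new RS has registered with the master
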
2023-07-15 18:15:36,334 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615 2023-07-15 18:15:36,334 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46849 2023-07-15 18:15:36,334 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38525 2023-07-15 18:15:36,339 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:36,339 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:36,339 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,339 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:36,339 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:36,339 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,339 WARN [RS:3;jenkins-hbase4:42585] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-15 18:15:36,339 INFO [RS:3;jenkins-hbase4:42585] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-15 18:15:36,339 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42585,1689444936271] 2023-07-15 18:15:36,339 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,340 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-15 18:15:36,340 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:36,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:36,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,342 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-15 18:15:36,342 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:36,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:36,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:36,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,349 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:36,350 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,350 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,351 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:36,351 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,352 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:36,352 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ZKUtil(162): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,353 DEBUG [RS:3;jenkins-hbase4:42585] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-15 18:15:36,353 INFO [RS:3;jenkins-hbase4:42585] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-15 18:15:36,354 INFO [RS:3;jenkins-hbase4:42585] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-15 18:15:36,354 INFO [RS:3;jenkins-hbase4:42585] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-15 18:15:36,354 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:36,354 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-15 18:15:36,356 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:36,356 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,356 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,357 DEBUG [RS:3;jenkins-hbase4:42585] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-15 18:15:36,358 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:36,358 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:36,358 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-15 18:15:36,368 INFO [RS:3;jenkins-hbase4:42585] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-15 18:15:36,369 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42585,1689444936271-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-15 18:15:36,379 INFO [RS:3;jenkins-hbase4:42585] regionserver.Replication(203): jenkins-hbase4.apache.org,42585,1689444936271 started 2023-07-15 18:15:36,379 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42585,1689444936271, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42585, sessionid=0x1016a32661a000b 2023-07-15 18:15:36,379 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-15 18:15:36,379 DEBUG [RS:3;jenkins-hbase4:42585] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,379 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42585,1689444936271' 2023-07-15 18:15:36,379 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42585,1689444936271' 2023-07-15 18:15:36,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-15 18:15:36,380 DEBUG [RS:3;jenkins-hbase4:42585] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-15 18:15:36,381 DEBUG [RS:3;jenkins-hbase4:42585] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-15 18:15:36,381 INFO [RS:3;jenkins-hbase4:42585] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-15 18:15:36,381 INFO [RS:3;jenkins-hbase4:42585] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-15 18:15:36,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:36,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:36,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:36,386 DEBUG [hconnection-0x39a8e7f3-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:36,388 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58042, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:36,391 DEBUG [hconnection-0x39a8e7f3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-15 18:15:36,394 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40342, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-15 18:15:36,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:36,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:36,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47360 deadline: 1689446136399, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:36,399 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:36,400 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:36,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,402 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:36,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:36,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:36,456 INFO [Listener at localhost/32839] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 510) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@59f4a23a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:33611 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1119514286@qtp-763879266-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44917 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1873184685-2193-acceptor-0@7c9d64c6-ServerConnector@4f43c59f{HTTP/1.1, (http/1.1)}{0.0.0.0:38525} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@3ebd36fd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365190706-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData-prefix:jenkins-hbase4.apache.org,40787,1689444934445 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 46849 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-3bd88177-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data1/current/BP-44005676-172.31.14.131-1689444933649 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2562 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 35699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 1 on default port 46849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1206518675-2253 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:63689@0x102d08ba-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@27f96f26 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:55662 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@15aa16a9[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:46849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44413-SendThread(127.0.0.1:57464) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:63689@0x0bcf76f8-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/32839 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) 
org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444935207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp299462523-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@51bd1ca8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 32839 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp299462523-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2564 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp972799410-2297 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_352568567_17 at /127.0.0.1:49194 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp299462523-2224-acceptor-0@65e6657b-ServerConnector@452ba432{HTTP/1.1, (http/1.1)}{0.0.0.0:42303} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1206518675-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1203471862_17 at /127.0.0.1:48860 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:63689): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2563-acceptor-0@7f57b066-ServerConnector@6969afd9{HTTP/1.1, (http/1.1)}{0.0.0.0:42295} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/32839.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x77bee678-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp299462523-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 32839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x77bee678-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 32839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 46849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:42585Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x6ad59b44 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@15409de9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-78adc80b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_499887494_17 at /127.0.0.1:55650 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:33611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 46849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1237270076-2566 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data5/current/BP-44005676-172.31.14.131-1689444933649 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 32839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1935415884@qtp-1656567826-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44615 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data4/current/BP-44005676-172.31.14.131-1689444933649 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x6ad59b44-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:55658 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 318997012@qtp-44519302-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42299 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) 
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1873184685-2196 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38289Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40787,1689444934445 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1365190706-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp299462523-2230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x0ac83e46 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp972799410-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2568 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x0bcf76f8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:33611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365190706-2283 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:42585-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x102d08ba sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x0ac83e46-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp972799410-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:33611 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x75c266bb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp972799410-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 32839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1873184685-2192 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp972799410-2298-acceptor-0@ca38a46-ServerConnector@373c44fd{HTTP/1.1, (http/1.1)}{0.0.0.0:41747} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2c012120 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) 
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost/44413-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42585 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615-prefix:jenkins-hbase4.apache.org,38289,1689444934501 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-565-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@46ba7984 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1203471862_17 at /127.0.0.1:49238 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57464@0x0ae5f21b-SendThread(127.0.0.1:57464) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: jenkins-hbase4:32819Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x102d08ba-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:33611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp299462523-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data3/current/BP-44005676-172.31.14.131-1689444933649 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615-prefix:jenkins-hbase4.apache.org,38289,1689444934501.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365190706-2284-acceptor-0@6756b2a8-ServerConnector@6dcc53d9{HTTP/1.1, (http/1.1)}{0.0.0.0:43759} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp299462523-2228 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x1c3a3843-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x75c266bb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:45011Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x77bee678-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:2;jenkins-hbase4:45011 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x69dad501-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615-prefix:jenkins-hbase4.apache.org,45011,1689444934762 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1916652564@qtp-1744748046-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4bf2ed3b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x3f9a9fba-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 35699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1237270076-2569 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-25cae59d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:33611 from jenkins.hfs.4 java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_499887494_17 at /127.0.0.1:48848 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:38289-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5df847ae java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:32819-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 754689695@qtp-1656567826-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 0 on default port 35699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-6fedfb9b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615-prefix:jenkins-hbase4.apache.org,32819,1689444934565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:32819 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57464@0x0ae5f21b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1203471862_17 at /127.0.0.1:55652 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:49248 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:63689 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: IPC Server handler 0 on default port 41859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x77bee678-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_352568567_17 at /127.0.0.1:48828 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x1c3a3843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:40787 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 775781514@qtp-1744748046-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42787 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@638ff0f3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365190706-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1203471862_17 at /127.0.0.1:48938 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444935207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1873184685-2195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 46849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3012ed27 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/32839.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:2;jenkins-hbase4:45011-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x6ad59b44-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_499887494_17 at /127.0.0.1:49226 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2254-acceptor-0@5bc3d24c-ServerConnector@3654aba1{HTTP/1.1, (http/1.1)}{0.0.0.0:36791} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x0bcf76f8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x3f9a9fba sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1d65ba36 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-14576aea-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:33611 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: qtp1365190706-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:33611 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689444929330 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x1c3a3843-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1873184685-2194 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1873184685-2198 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35699 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data6/current/BP-44005676-172.31.14.131-1689444933649 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45011 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 41859 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:0;jenkins-hbase4:38289 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1206518675-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:46849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_352568567_17 at /127.0.0.1:49154 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:46849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x0ac83e46-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5963e692[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1237270076-2567 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x77bee678-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1440056898@qtp-44519302-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:49254 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:40787 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data2/current/BP-44005676-172.31.14.131-1689444933649 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x75c266bb-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp972799410-2296 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@230cd395[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63689@0x3f9a9fba-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1873184685-2197 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: 557330062@qtp-763879266-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp299462523-2223 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1365190706-2288 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp972799410-2295 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/876776504.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1365190706-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57464@0x0ae5f21b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1574563468.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:46849 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:48880 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77bee678-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1217917147_17 at /127.0.0.1:48866 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 32839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 3 on default port 41859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_499887494_17 at /127.0.0.1:55608 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 35699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1771576432) connection to localhost/127.0.0.1:46849 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x39a8e7f3-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-44005676-172.31.14.131-1689444933649:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32839-SendThread(127.0.0.1:63689) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@48b2603b java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x39a8e7f3-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1873184685-2199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1771570797) sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_352568567_17 at /127.0.0.1:55622 [Receiving block BP-44005676-172.31.14.131-1689444933649:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp972799410-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=854 (was 794) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=398 (was 408), ProcessCount=172 (was 172), AvailableMemoryMB=2654 (was 2884) 2023-07-15 18:15:36,459 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-15 18:15:36,479 INFO [Listener at localhost/32839] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=560, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=398, ProcessCount=172, AvailableMemoryMB=2653 2023-07-15 18:15:36,479 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-15 18:15:36,479 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-15 18:15:36,483 INFO [RS:3;jenkins-hbase4:42585] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42585%2C1689444936271, suffix=, logDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,42585,1689444936271, archiveDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs, maxLogs=32 2023-07-15 18:15:36,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:36,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
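The entries just above show the per-test cleanup that the test harness runs between methods: list the rsgroups, move an empty table set and an empty server set back to the "default" group (the server logs "moveTables() passed an empty set. Ignoring."), and remove the temporary "master" group. A minimal, hedged sketch of that cleanup using the RSGroupAdminClient referenced in the stack traces below; the helper name and exact signatures are assumptions based on the RPCs visible in this log, not a copy of the test source.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  // Rough equivalent of the cleanup logged above (group names taken from the log,
  // everything else illustrative): push nothing back to "default", then drop "master".
  static void cleanup(Connection conn) throws Exception {
    RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
    groupAdmin.listRSGroups();                                           // ListRSGroupInfos RPC
    groupAdmin.moveTables(Collections.<TableName>emptySet(), "default"); // MoveTables RPC (ignored when empty)
    groupAdmin.moveServers(Collections.<Address>emptySet(), "default");  // MoveServers RPC
    groupAdmin.removeRSGroup("master");                                  // RemoveRSGroup RPC
  }
}
```

The log then shows the harness re-adding the "master" group and attempting to move the master's own address into it, which the server rejects with the ConstraintException traced below; the test treats that rejection as expected setup noise ("Got this on setup, FYI").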
2023-07-15 18:15:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:36,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:36,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:36,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:36,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:36,492 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:36,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:36,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,495 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:36,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:36,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:36,502 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK] 2023-07-15 18:15:36,502 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK] 2023-07-15 18:15:36,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,503 DEBUG 
[RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK] 2023-07-15 18:15:36,505 INFO [RS:3;jenkins-hbase4:42585] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/WALs/jenkins-hbase4.apache.org,42585,1689444936271/jenkins-hbase4.apache.org%2C42585%2C1689444936271.1689444936483 2023-07-15 18:15:36,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:36,505 DEBUG [RS:3;jenkins-hbase4:42585] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46563,DS-435c7217-e6c9-4bf2-894f-b1e58d08c111,DISK], DatanodeInfoWithStorage[127.0.0.1:41525,DS-fc713ac8-587a-420a-9dde-77f8992a4597,DISK], DatanodeInfoWithStorage[127.0.0.1:39567,DS-c647eab0-693d-4c96-93bd-80faad671768,DISK]] 2023-07-15 18:15:36,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:36,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47360 deadline: 1689446136505, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:36,506 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:36,507 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:36,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:36,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:36,508 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:36,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:36,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:36,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:36,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-15 18:15:36,512 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:36,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-15 18:15:36,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:36,514 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:36,514 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:36,515 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:36,517 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-15 18:15:36,518 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 
18:15:36,518 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff empty. 2023-07-15 18:15:36,519 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,519 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-15 18:15:36,531 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-15 18:15:36,532 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 104c4c2be77786e9983bd8fc5daf5aff, NAME => 't1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 104c4c2be77786e9983bd8fc5daf5aff, disabling compactions & flushes 2023-07-15 18:15:36,542 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. after waiting 0 ms 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,542 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,542 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 104c4c2be77786e9983bd8fc5daf5aff: 2023-07-15 18:15:36,544 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-15 18:15:36,545 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444936545"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444936545"}]},"ts":"1689444936545"} 2023-07-15 18:15:36,546 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
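The CreateTableProcedure entries above (pid=12) walk through the usual states: CREATE_TABLE_PRE_OPERATION, CREATE_TABLE_WRITE_FS_LAYOUT (region directory plus .tabledesc/.tableinfo under the .tmp area), then CREATE_TABLE_ADD_TO_META. The client call that kicks this off is the create of 't1' with a single 'cf1' family logged at 18:15:36,510. A minimal sketch of an equivalent client-side create using the standard 2.x Admin API; the test itself may well go through a testing-utility helper instead, so treat this as illustrative only.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1Sketch {
  // Matches the schema echoed in the log: one family 'cf1', VERSIONS => '1',
  // all other attributes left at their defaults. The blocking create returns
  // once the CreateTableProcedure (pid=12 above) reaches SUCCESS.
  static void createT1(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
            .setMaxVersions(1)
            .build())
        .build();
    admin.createTable(desc);
  }
}
```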
2023-07-15 18:15:36,547 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-15 18:15:36,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444936547"}]},"ts":"1689444936547"} 2023-07-15 18:15:36,548 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-15 18:15:36,552 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-15 18:15:36,552 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, ASSIGN}] 2023-07-15 18:15:36,553 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, ASSIGN 2023-07-15 18:15:36,554 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32819,1689444934565; forceNewPlan=false, retain=false 2023-07-15 18:15:36,590 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-15 18:15:36,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:36,704 INFO [jenkins-hbase4:40787] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
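At CREATE_TABLE_ASSIGN_REGIONS the master builds a balancer view of the cluster (one host, one rack in the lines above), spawns a TransitRegionStateProcedure for the new region, and round-robins it onto jenkins-hbase4.apache.org,32819. From the client side the resulting placement can be read back through a RegionLocator; a small sketch, with only the table name taken from the log and the rest illustrative.

```java
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class LocateT1Sketch {
  // Prints where each region of 't1' landed once the ASSIGN procedure above completes.
  static void printLocations(Connection conn) throws Exception {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```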
2023-07-15 18:15:36,705 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=104c4c2be77786e9983bd8fc5daf5aff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,705 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444936705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444936705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444936705"}]},"ts":"1689444936705"} 2023-07-15 18:15:36,706 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 104c4c2be77786e9983bd8fc5daf5aff, server=jenkins-hbase4.apache.org,32819,1689444934565}] 2023-07-15 18:15:36,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:36,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 104c4c2be77786e9983bd8fc5daf5aff, NAME => 't1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.', STARTKEY => '', ENDKEY => ''} 2023-07-15 18:15:36,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-15 18:15:36,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,862 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,863 INFO [StoreOpener-104c4c2be77786e9983bd8fc5daf5aff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,864 DEBUG [StoreOpener-104c4c2be77786e9983bd8fc5daf5aff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/cf1 2023-07-15 18:15:36,864 DEBUG [StoreOpener-104c4c2be77786e9983bd8fc5daf5aff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/cf1 2023-07-15 18:15:36,865 INFO [StoreOpener-104c4c2be77786e9983bd8fc5daf5aff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 104c4c2be77786e9983bd8fc5daf5aff columnFamilyName cf1 2023-07-15 18:15:36,865 INFO [StoreOpener-104c4c2be77786e9983bd8fc5daf5aff-1] regionserver.HStore(310): Store=104c4c2be77786e9983bd8fc5daf5aff/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-15 18:15:36,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:36,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-15 18:15:36,871 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 104c4c2be77786e9983bd8fc5daf5aff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11172713600, jitterRate=0.04054003953933716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-15 18:15:36,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 104c4c2be77786e9983bd8fc5daf5aff: 2023-07-15 18:15:36,872 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff., pid=14, masterSystemTime=1689444936858 2023-07-15 18:15:36,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:36,873 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 
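The CompactionConfiguration line in the region-open sequence above echoes the per-store compaction settings in effect when the 'cf1' store opens: minCompactSize 128 MB, 3-10 files per compaction, ratio 1.2 (5.0 off-peak), a throttle point of 2684354560 bytes, and a 7-day major compaction period with 0.5 jitter. To the best of my reading these map onto the stock hbase-site keys shown below; the key names are an assumption about the standard configuration surface, and the values simply mirror what this log reports.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfSketch {
  // The knobs behind the CompactionConfiguration line logged when the 'cf1' store opens.
  static Configuration compactionSettingsAsLogged() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);       // minCompactSize:128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                              // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                             // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                       // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);               // off-peak ratio 5.000000
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L); // throttle point
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);                  // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);                // major jitter 0.500000
    return conf;
  }
}
```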
2023-07-15 18:15:36,874 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=104c4c2be77786e9983bd8fc5daf5aff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:36,874 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444936873"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689444936873"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689444936873"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689444936873"}]},"ts":"1689444936873"} 2023-07-15 18:15:36,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-15 18:15:36,876 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 104c4c2be77786e9983bd8fc5daf5aff, server=jenkins-hbase4.apache.org,32819,1689444934565 in 169 msec 2023-07-15 18:15:36,878 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-15 18:15:36,878 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, ASSIGN in 324 msec 2023-07-15 18:15:36,878 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-15 18:15:36,878 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444936878"}]},"ts":"1689444936878"} 2023-07-15 18:15:36,879 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-15 18:15:36,881 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-15 18:15:36,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 371 msec 2023-07-15 18:15:37,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-15 18:15:37,116 INFO [Listener at localhost/32839] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-15 18:15:37,116 DEBUG [Listener at localhost/32839] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-15 18:15:37,117 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,119 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-15 18:15:37,119 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,119 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
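For orientation: the CreateTableProcedure span above (pid=12 with subprocedures pid=13/14) is what the master runs when a client asks for table 't1' with one column family 'cf1', and the trailing "Waiting until all regions of table t1 get assigned" entries are the test blocking on assignment. A minimal client-side sketch of that call sequence follows; it is illustrative only and not taken from this log. The class name, the Connection parameter conn, and the HBaseTestingUtility parameter util are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateT1Sketch {
      // Create 't1' with a single column family 'cf1' (the descriptor logged by
      // HMaster above) and block until its region is assigned, mirroring the
      // "Waiting until all regions of table t1 get assigned" step in the log.
      static void createT1(Connection conn, HBaseTestingUtility util) throws IOException {
        TableName t1 = TableName.valueOf("t1");
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(t1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
              .build());
        }
        util.waitUntilAllRegionsAssigned(t1);
      }
    }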
2023-07-15 18:15:37,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-15 18:15:37,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-15 18:15:37,123 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-15 18:15:37,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-15 18:15:37,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:47360 deadline: 1689444997120, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-15 18:15:37,125 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-15 18:15:37,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,227 INFO [Listener at localhost/32839] client.HBaseAdmin$15(890): Started disable of t1 2023-07-15 18:15:37,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-15 18:15:37,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-15 18:15:37,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:37,231 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444937231"}]},"ts":"1689444937231"} 2023-07-15 18:15:37,232 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-15 18:15:37,234 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-15 18:15:37,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, UNASSIGN}] 2023-07-15 18:15:37,235 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, UNASSIGN 2023-07-15 18:15:37,236 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=104c4c2be77786e9983bd8fc5daf5aff, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:37,236 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444937236"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689444937236"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689444937236"}]},"ts":"1689444937236"} 2023-07-15 18:15:37,238 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 104c4c2be77786e9983bd8fc5daf5aff, server=jenkins-hbase4.apache.org,32819,1689444934565}] 2023-07-15 18:15:37,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:37,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:37,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 104c4c2be77786e9983bd8fc5daf5aff, disabling compactions & flushes 2023-07-15 18:15:37,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:37,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:37,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. after waiting 0 ms 2023-07-15 18:15:37,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 
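The rejected CreateTable above (callId 65, pid=15 rolled back with TableExistsException) is the client-visible outcome of re-creating a table that already exists, and the DisableTableProcedure entries that follow begin the teardown of 't1'. A hedged sketch of that second create attempt from the client side; class and variable names are assumptions, not code from TestRSGroupsAdmin1.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateExistingT1Sketch {
      // A second CreateTable for 't1' is rejected: the master rolls back the new
      // procedure (pid=15 above) and the RPC surfaces TableExistsException.
      static void createAgain(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
              .build());
        } catch (TableExistsException expected) {
          // Matches "Rolled back pid=15 ... TableExistsException: t1" in the log.
        }
      }
    }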
2023-07-15 18:15:37,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-15 18:15:37,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff. 2023-07-15 18:15:37,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 104c4c2be77786e9983bd8fc5daf5aff: 2023-07-15 18:15:37,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:37,397 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=104c4c2be77786e9983bd8fc5daf5aff, regionState=CLOSED 2023-07-15 18:15:37,397 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689444937397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689444937397"}]},"ts":"1689444937397"} 2023-07-15 18:15:37,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-15 18:15:37,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 104c4c2be77786e9983bd8fc5daf5aff, server=jenkins-hbase4.apache.org,32819,1689444934565 in 160 msec 2023-07-15 18:15:37,401 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-15 18:15:37,401 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=104c4c2be77786e9983bd8fc5daf5aff, UNASSIGN in 165 msec 2023-07-15 18:15:37,401 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689444937401"}]},"ts":"1689444937401"} 2023-07-15 18:15:37,402 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-15 18:15:37,404 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-15 18:15:37,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 177 msec 2023-07-15 18:15:37,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-15 18:15:37,533 INFO [Listener at localhost/32839] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-15 18:15:37,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-15 18:15:37,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-15 18:15:37,537 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-15 18:15:37,537 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-15 18:15:37,537 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-15 18:15:37,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,541 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:37,543 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/cf1, FileablePath, hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/recovered.edits] 2023-07-15 18:15:37,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 18:15:37,548 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/recovered.edits/4.seqid to hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/archive/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff/recovered.edits/4.seqid 2023-07-15 18:15:37,549 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/.tmp/data/default/t1/104c4c2be77786e9983bd8fc5daf5aff 2023-07-15 18:15:37,549 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-15 18:15:37,551 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-15 18:15:37,553 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-15 18:15:37,554 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-15 18:15:37,555 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-15 18:15:37,555 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-15 18:15:37,556 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689444937555"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:37,557 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-15 18:15:37,557 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 104c4c2be77786e9983bd8fc5daf5aff, NAME => 't1,,1689444936510.104c4c2be77786e9983bd8fc5daf5aff.', STARTKEY => '', ENDKEY => ''}] 2023-07-15 18:15:37,557 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-15 18:15:37,557 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689444937557"}]},"ts":"9223372036854775807"} 2023-07-15 18:15:37,559 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-15 18:15:37,561 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-15 18:15:37,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 27 msec 2023-07-15 18:15:37,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-15 18:15:37,645 INFO [Listener at localhost/32839] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-15 18:15:37,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
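The DisableTableProcedure (pid=16) and DeleteTableProcedure (pid=19) spans above are the standard two-step removal of a table: it must be disabled (regions unassigned, state=DISABLED in hbase:meta) before it can be deleted (region directories archived, META rows and the descriptor removed). A minimal sketch of the client calls that drive this, assuming an illustrative Connection named conn.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class DropT1Sketch {
      // Disable, then delete 't1'; the master runs DisableTableProcedure and
      // DeleteTableProcedure for these two calls, as traced in the log above.
      static void dropT1(Connection conn) throws IOException {
        TableName t1 = TableName.valueOf("t1");
        try (Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(t1)) {
            admin.disableTable(t1);  // procId 16 in this log
          }
          admin.deleteTable(t1);     // procId 19 in this log
        }
      }
    }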
2023-07-15 18:15:37,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,664 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47360 deadline: 1689446137674, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,675 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:37,679 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,680 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,699 INFO [Listener at localhost/32839] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 560) - Thread LEAK? -, OpenFileDescriptor=860 (was 854) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=390 (was 398), ProcessCount=172 (was 172), AvailableMemoryMB=2664 (was 2653) - AvailableMemoryMB LEAK? 
- 2023-07-15 18:15:37,699 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-15 18:15:37,718 INFO [Listener at localhost/32839] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=860, MaxFileDescriptor=60000, SystemLoadAverage=390, ProcessCount=172, AvailableMemoryMB=2663 2023-07-15 18:15:37,719 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-15 18:15:37,719 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-15 18:15:37,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:37,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,733 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,741 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446137748, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,748 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:37,751 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,752 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-15 18:15:37,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:37,754 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-15 18:15:37,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-15 18:15:37,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-15 18:15:37,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
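The repeating block of RSGroupAdminService requests above is the TestRSGroupsBase setup/teardown: list the groups, move stray tables and servers back to 'default', remove and re-add the 'master' group, then try to move the master's own address into it. That last step fails with the ConstraintException seen in the stack traces, because only live region servers are known to the rsgroup manager. A rough client-side equivalent using the RSGroupAdminClient calls named in those traces; the constructor and method signatures here are an assumption based on the branch-2 hbase-rsgroup module, and the class and variable names are illustrative.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RsGroupTeardownSketch {
      // Mirror the teardown order in the log: drop and recreate the 'master'
      // group, then attempt to move the active master's address into it.
      static void resetMasterGroup(Connection conn) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        groupAdmin.removeRSGroup("master");
        groupAdmin.addRSGroup("master");
        try {
          groupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40787)),
              "master");
        } catch (ConstraintException e) {
          // Expected: "Server ...:40787 is either offline or it does not exist."
        }
      }
    }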
2023-07-15 18:15:37,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,773 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446137782, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,783 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:37,785 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,786 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,813 INFO [Listener at localhost/32839] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? -, OpenFileDescriptor=860 (was 860), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=390 (was 390), ProcessCount=172 (was 172), AvailableMemoryMB=2664 (was 2663) - AvailableMemoryMB LEAK? 
- 2023-07-15 18:15:37,813 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-15 18:15:37,834 INFO [Listener at localhost/32839] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=860, MaxFileDescriptor=60000, SystemLoadAverage=390, ProcessCount=172, AvailableMemoryMB=2663 2023-07-15 18:15:37,834 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-15 18:15:37,835 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-15 18:15:37,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:37,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,850 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,854 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446137860, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,861 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:37,863 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,864 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:37,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,895 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446137910, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,911 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:37,913 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,915 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,943 INFO [Listener at localhost/32839] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? -, OpenFileDescriptor=860 (was 860), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=390 (was 390), ProcessCount=172 (was 172), AvailableMemoryMB=2664 (was 2663) - AvailableMemoryMB LEAK? 
- 2023-07-15 18:15:37,943 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-15 18:15:37,967 INFO [Listener at localhost/32839] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=860, MaxFileDescriptor=60000, SystemLoadAverage=390, ProcessCount=172, AvailableMemoryMB=2663 2023-07-15 18:15:37,967 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-15 18:15:37,967 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-15 18:15:37,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:37,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-15 18:15:37,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:37,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:37,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:37,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:37,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:37,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:37,979 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:37,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:37,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,984 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:37,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:37,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:37,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:37,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446137989, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:37,989 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-15 18:15:37,991 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:37,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:37,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:37,992 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:37,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:37,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:37,993 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-15 18:15:37,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-15 18:15:37,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-15 18:15:37,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:37,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:37,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-15 18:15:37,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:38,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-15 18:15:38,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,006 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 18:15:38,010 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:38,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-15 18:15:38,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-15 18:15:38,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-15 18:15:38,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:38,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:47360 deadline: 1689446138110, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-15 18:15:38,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-15 18:15:38,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 18:15:38,152 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-15 18:15:38,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 26 msec 2023-07-15 18:15:38,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-15 18:15:38,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-15 18:15:38,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-15 18:15:38,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:38,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-15 18:15:38,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:38,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-15 18:15:38,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:38,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-15 18:15:38,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,275 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,277 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-15 18:15:38,279 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,280 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-15 18:15:38,281 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-15 18:15:38,281 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,283 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-15 18:15:38,284 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-15 18:15:38,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-15 18:15:38,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-15 18:15:38,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-15 18:15:38,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:38,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:38,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-15 18:15:38,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:38,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:38,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:47360 deadline: 1689444998391, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-15 18:15:38,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:38,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
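The two ConstraintExceptions above come from the testNamespaceConstraint sequence: a namespace that names an rsgroup via hbase.rsgroup.name pins that group until the namespace is modified or deleted, and a namespace may only name a group that already exists. A minimal, hypothetical Java sketch of that client-side sequence, assuming only the RSGroupAdminClient and Admin APIs visible in the stack traces; the connection setup, the class name, and the "Group_bar" namespace are illustrative and not taken from the test itself:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceConstraintSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Add rsgroup Group_foo, then create a namespace that references it.
      rsGroupAdmin.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      // Removing the group while the namespace still references it is rejected:
      // ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
      try {
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (IOException expected) {
        // Expected (ConstraintException) while the namespace still points at the group.
      }

      // Once the namespace is gone, the group can be removed.
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");

      // Creating a namespace that names a nonexistent group is also rejected:
      // ConstraintException: Region server group foo does not exist.
      try {
        admin.createNamespace(NamespaceDescriptor.create("Group_bar")
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (IOException expected) {
        // Expected (ConstraintException): group "foo" was never created.
      }
    }
  }
}
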
2023-07-15 18:15:38,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:38,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:38,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:38,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-15 18:15:38,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:38,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:38,400 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-15 18:15:38,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:38,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-15 18:15:38,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-15 18:15:38,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-15 18:15:38,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-15 18:15:38,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-15 18:15:38,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-15 18:15:38,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:38,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-15 18:15:38,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-15 18:15:38,409 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-15 18:15:38,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-15 18:15:38,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-15 18:15:38,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-15 18:15:38,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-15 18:15:38,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-15 18:15:38,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40787] to rsgroup master 2023-07-15 18:15:38,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-15 18:15:38,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47360 deadline: 1689446138418, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 2023-07-15 18:15:38,418 WARN [Listener at localhost/32839] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40787 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-15 18:15:38,420 INFO [Listener at localhost/32839] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-15 18:15:38,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-15 18:15:38,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-15 18:15:38,421 INFO [Listener at localhost/32839] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32819, jenkins-hbase4.apache.org:38289, jenkins-hbase4.apache.org:42585, jenkins-hbase4.apache.org:45011], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-15 18:15:38,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-15 18:15:38,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40787] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-15 18:15:38,441 INFO [Listener at localhost/32839] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572 (was 572), OpenFileDescriptor=860 (was 860), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=390 (was 390), ProcessCount=172 (was 172), AvailableMemoryMB=2661 (was 2663) 2023-07-15 18:15:38,442 WARN [Listener at localhost/32839] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-15 18:15:38,442 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-15 18:15:38,442 INFO [Listener at localhost/32839] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-15 18:15:38,442 DEBUG [Listener at localhost/32839] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ac83e46 to 127.0.0.1:63689 2023-07-15 18:15:38,442 DEBUG [Listener at localhost/32839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,442 DEBUG [Listener at localhost/32839] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-15 
18:15:38,442 DEBUG [Listener at localhost/32839] util.JVMClusterUtil(257): Found active master hash=514671340, stopped=false 2023-07-15 18:15:38,442 DEBUG [Listener at localhost/32839] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-15 18:15:38,442 DEBUG [Listener at localhost/32839] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-15 18:15:38,442 INFO [Listener at localhost/32839] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:38,444 INFO [Listener at localhost/32839] procedure2.ProcedureExecutor(629): Stopping 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-15 18:15:38,444 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:38,444 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:38,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:38,445 DEBUG [Listener at localhost/32839] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f9a9fba to 127.0.0.1:63689 2023-07-15 18:15:38,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:38,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-15 18:15:38,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-07-15 18:15:38,445 DEBUG [Listener at localhost/32839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38289,1689444934501' ***** 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32819,1689444934565' ***** 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:38,445 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45011,1689444934762' ***** 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42585,1689444936271' ***** 2023-07-15 18:15:38,445 INFO [Listener at localhost/32839] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-15 18:15:38,445 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:38,447 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:38,445 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:38,451 INFO [RS:0;jenkins-hbase4:38289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@43ec58c0{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:38,451 INFO [RS:2;jenkins-hbase4:45011] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2947125d{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:38,451 INFO [RS:1;jenkins-hbase4:32819] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@341f1006{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:38,452 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:38,452 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:38,452 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,452 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,453 INFO [RS:3;jenkins-hbase4:42585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@283b6a7e{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-15 18:15:38,453 INFO [RS:2;jenkins-hbase4:45011] server.AbstractConnector(383): Stopped ServerConnector@6dcc53d9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 
18:15:38,453 INFO [RS:0;jenkins-hbase4:38289] server.AbstractConnector(383): Stopped ServerConnector@452ba432{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:38,453 INFO [RS:1;jenkins-hbase4:32819] server.AbstractConnector(383): Stopped ServerConnector@3654aba1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:38,453 INFO [RS:0;jenkins-hbase4:38289] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:38,453 INFO [RS:3;jenkins-hbase4:42585] server.AbstractConnector(383): Stopped ServerConnector@6969afd9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:38,453 INFO [RS:2;jenkins-hbase4:45011] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:38,454 INFO [RS:3;jenkins-hbase4:42585] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:38,453 INFO [RS:1;jenkins-hbase4:32819] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:38,454 INFO [RS:0;jenkins-hbase4:38289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@383351f3{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:38,456 INFO [RS:1;jenkins-hbase4:32819] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7dc6a6e9{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:38,455 INFO [RS:2;jenkins-hbase4:45011] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@54037bb0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:38,457 INFO [RS:1;jenkins-hbase4:32819] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@272d5fe5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:38,457 INFO [RS:2;jenkins-hbase4:45011] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2aa6aee7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:38,455 INFO [RS:3;jenkins-hbase4:42585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39ee291c{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:38,456 INFO [RS:0;jenkins-hbase4:38289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@260d4e64{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:38,458 INFO [RS:3;jenkins-hbase4:42585] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@320b0f5c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:38,458 INFO [RS:1;jenkins-hbase4:32819] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:38,459 INFO [RS:1;jenkins-hbase4:32819] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush 
procedure manager gracefully. 2023-07-15 18:15:38,459 INFO [RS:1;jenkins-hbase4:32819] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:38,459 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(3305): Received CLOSE for 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:38,459 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:38,459 INFO [RS:0;jenkins-hbase4:38289] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:38,459 DEBUG [RS:1;jenkins-hbase4:32819] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0bcf76f8 to 127.0.0.1:63689 2023-07-15 18:15:38,459 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:38,459 DEBUG [RS:1;jenkins-hbase4:32819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,459 INFO [RS:0;jenkins-hbase4:38289] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:38,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 79d04ed49523ba28c3f52d06fb1d144a, disabling compactions & flushes 2023-07-15 18:15:38,459 INFO [RS:3;jenkins-hbase4:42585] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:38,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:38,459 INFO [RS:2;jenkins-hbase4:45011] regionserver.HeapMemoryManager(220): Stopping 2023-07-15 18:15:38,459 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:38,460 INFO [RS:2;jenkins-hbase4:45011] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:38,459 INFO [RS:3;jenkins-hbase4:42585] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-15 18:15:38,460 INFO [RS:2;jenkins-hbase4:45011] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:38,459 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-15 18:15:38,460 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(3305): Received CLOSE for 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:38,459 INFO [RS:0;jenkins-hbase4:38289] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-15 18:15:38,460 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:38,459 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 18:15:38,460 DEBUG [RS:0;jenkins-hbase4:38289] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x102d08ba to 127.0.0.1:63689 2023-07-15 18:15:38,460 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1478): Online Regions={79d04ed49523ba28c3f52d06fb1d144a=hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a.} 2023-07-15 18:15:38,460 INFO [RS:3;jenkins-hbase4:42585] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-15 18:15:38,460 DEBUG [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1504): Waiting on 79d04ed49523ba28c3f52d06fb1d144a 2023-07-15 18:15:38,460 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. after waiting 0 ms 2023-07-15 18:15:38,460 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:38,460 DEBUG [RS:0;jenkins-hbase4:38289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,460 DEBUG [RS:3;jenkins-hbase4:42585] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x75c266bb to 127.0.0.1:63689 2023-07-15 18:15:38,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:38,460 DEBUG [RS:3;jenkins-hbase4:42585] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,460 INFO [RS:0;jenkins-hbase4:38289] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:38,461 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42585,1689444936271; all regions closed. 2023-07-15 18:15:38,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 79d04ed49523ba28c3f52d06fb1d144a 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-15 18:15:38,461 INFO [RS:0;jenkins-hbase4:38289] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:38,461 INFO [RS:0;jenkins-hbase4:38289] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:38,461 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:38,461 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-15 18:15:38,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 041cc93b165c8cbb6d01c8a8caefe242, disabling compactions & flushes 2023-07-15 18:15:38,461 DEBUG [RS:2;jenkins-hbase4:45011] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1c3a3843 to 127.0.0.1:63689 2023-07-15 18:15:38,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 
2023-07-15 18:15:38,461 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 18:15:38,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:38,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-15 18:15:38,461 DEBUG [RS:2;jenkins-hbase4:45011] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-15 18:15:38,461 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-15 18:15:38,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. after waiting 0 ms 2023-07-15 18:15:38,461 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-15 18:15:38,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:38,462 DEBUG [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-15 18:15:38,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 041cc93b165c8cbb6d01c8a8caefe242 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-15 18:15:38,461 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1478): Online Regions={041cc93b165c8cbb6d01c8a8caefe242=hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242.} 2023-07-15 18:15:38,462 DEBUG [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1504): Waiting on 041cc93b165c8cbb6d01c8a8caefe242 2023-07-15 18:15:38,461 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-15 18:15:38,462 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-15 18:15:38,462 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-15 18:15:38,462 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-15 18:15:38,467 DEBUG [RS:3;jenkins-hbase4:42585] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42585%2C1689444936271:(num 1689444936483) 2023-07-15 18:15:38,468 DEBUG [RS:3;jenkins-hbase4:42585] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, 
ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:38,468 INFO [RS:3;jenkins-hbase4:42585] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:38,468 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:38,470 INFO [RS:3;jenkins-hbase4:42585] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42585 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42585,1689444936271 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,472 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,472 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42585,1689444936271] 2023-07-15 18:15:38,472 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing 
jenkins-hbase4.apache.org,42585,1689444936271; numProcessing=1 2023-07-15 18:15:38,473 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42585,1689444936271 already deleted, retry=false 2023-07-15 18:15:38,473 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42585,1689444936271 expired; onlineServers=3 2023-07-15 18:15:38,474 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/.tmp/m/7372d06f388649a5b314e654f9bc8d70 2023-07-15 18:15:38,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/.tmp/info/290e3b96dcfc43b8b7318f057b3cc3bf 2023-07-15 18:15:38,484 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/info/8faa0776a72442b5a36ed09fac6b2c95 2023-07-15 18:15:38,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 290e3b96dcfc43b8b7318f057b3cc3bf 2023-07-15 18:15:38,490 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8faa0776a72442b5a36ed09fac6b2c95 2023-07-15 18:15:38,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7372d06f388649a5b314e654f9bc8d70 2023-07-15 18:15:38,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/.tmp/info/290e3b96dcfc43b8b7318f057b3cc3bf as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/info/290e3b96dcfc43b8b7318f057b3cc3bf 2023-07-15 18:15:38,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/.tmp/m/7372d06f388649a5b314e654f9bc8d70 as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/m/7372d06f388649a5b314e654f9bc8d70 2023-07-15 18:15:38,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 290e3b96dcfc43b8b7318f057b3cc3bf 2023-07-15 18:15:38,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/info/290e3b96dcfc43b8b7318f057b3cc3bf, entries=3, sequenceid=9, filesize=5.0 K 2023-07-15 18:15:38,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7372d06f388649a5b314e654f9bc8d70 2023-07-15 18:15:38,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/m/7372d06f388649a5b314e654f9bc8d70, entries=12, sequenceid=29, filesize=5.4 K 2023-07-15 18:15:38,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 041cc93b165c8cbb6d01c8a8caefe242 in 36ms, sequenceid=9, compaction requested=false 2023-07-15 18:15:38,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 79d04ed49523ba28c3f52d06fb1d144a in 38ms, sequenceid=29, compaction requested=false 2023-07-15 18:15:38,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/namespace/041cc93b165c8cbb6d01c8a8caefe242/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-15 18:15:38,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:38,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 041cc93b165c8cbb6d01c8a8caefe242: 2023-07-15 18:15:38,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689444935691.041cc93b165c8cbb6d01c8a8caefe242. 2023-07-15 18:15:38,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/rsgroup/79d04ed49523ba28c3f52d06fb1d144a/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-15 18:15:38,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:38,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 2023-07-15 18:15:38,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 79d04ed49523ba28c3f52d06fb1d144a: 2023-07-15 18:15:38,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689444935800.79d04ed49523ba28c3f52d06fb1d144a. 
2023-07-15 18:15:38,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/rep_barrier/7b27c3cc3da64f3f88dc1b300286bda1 2023-07-15 18:15:38,530 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b27c3cc3da64f3f88dc1b300286bda1 2023-07-15 18:15:38,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/table/5b8cf189cfd34e6680ee635b1cdc2703 2023-07-15 18:15:38,546 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5b8cf189cfd34e6680ee635b1cdc2703 2023-07-15 18:15:38,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/info/8faa0776a72442b5a36ed09fac6b2c95 as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/info/8faa0776a72442b5a36ed09fac6b2c95 2023-07-15 18:15:38,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8faa0776a72442b5a36ed09fac6b2c95 2023-07-15 18:15:38,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/info/8faa0776a72442b5a36ed09fac6b2c95, entries=22, sequenceid=26, filesize=7.3 K 2023-07-15 18:15:38,554 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/rep_barrier/7b27c3cc3da64f3f88dc1b300286bda1 as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/rep_barrier/7b27c3cc3da64f3f88dc1b300286bda1 2023-07-15 18:15:38,560 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b27c3cc3da64f3f88dc1b300286bda1 2023-07-15 18:15:38,560 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/rep_barrier/7b27c3cc3da64f3f88dc1b300286bda1, entries=1, sequenceid=26, filesize=4.9 K 2023-07-15 18:15:38,561 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/.tmp/table/5b8cf189cfd34e6680ee635b1cdc2703 as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/table/5b8cf189cfd34e6680ee635b1cdc2703 2023-07-15 18:15:38,567 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded 
Delete Family Bloom (CompoundBloomFilter) metadata for 5b8cf189cfd34e6680ee635b1cdc2703 2023-07-15 18:15:38,567 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/table/5b8cf189cfd34e6680ee635b1cdc2703, entries=6, sequenceid=26, filesize=5.1 K 2023-07-15 18:15:38,568 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 106ms, sequenceid=26, compaction requested=false 2023-07-15 18:15:38,578 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-15 18:15:38,579 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-15 18:15:38,580 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:38,580 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-15 18:15:38,580 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-15 18:15:38,644 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:38,644 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:42585-0x1016a32661a000b, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:38,644 INFO [RS:3;jenkins-hbase4:42585] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42585,1689444936271; zookeeper connection closed. 2023-07-15 18:15:38,644 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2e925ce0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2e925ce0 2023-07-15 18:15:38,660 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32819,1689444934565; all regions closed. 2023-07-15 18:15:38,662 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38289,1689444934501; all regions closed. 2023-07-15 18:15:38,662 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45011,1689444934762; all regions closed. 
2023-07-15 18:15:38,667 DEBUG [RS:1;jenkins-hbase4:32819] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32819%2C1689444934565:(num 1689444935494) 2023-07-15 18:15:38,667 DEBUG [RS:1;jenkins-hbase4:32819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:38,667 INFO [RS:1;jenkins-hbase4:32819] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:38,667 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:38,667 DEBUG [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs 2023-07-15 18:15:38,669 INFO [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38289%2C1689444934501.meta:.meta(num 1689444935623) 2023-07-15 18:15:38,668 INFO [RS:1;jenkins-hbase4:32819] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32819 2023-07-15 18:15:38,670 DEBUG [RS:2;jenkins-hbase4:45011] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs 2023-07-15 18:15:38,670 INFO [RS:2;jenkins-hbase4:45011] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45011%2C1689444934762:(num 1689444935489) 2023-07-15 18:15:38,670 DEBUG [RS:2;jenkins-hbase4:45011] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,670 INFO [RS:2;jenkins-hbase4:45011] regionserver.LeaseManager(133): Closed leases 2023-07-15 18:15:38,670 INFO [RS:2;jenkins-hbase4:45011] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:38,670 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,670 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-15 18:15:38,670 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:38,670 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:38,670 INFO [RS:2;jenkins-hbase4:45011] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-15 18:15:38,670 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32819,1689444934565 2023-07-15 18:15:38,671 INFO [RS:2;jenkins-hbase4:45011] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-15 18:15:38,671 INFO [RS:2;jenkins-hbase4:45011] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-15 18:15:38,671 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32819,1689444934565] 2023-07-15 18:15:38,671 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32819,1689444934565; numProcessing=2 2023-07-15 18:15:38,672 INFO [RS:2;jenkins-hbase4:45011] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45011 2023-07-15 18:15:38,674 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:38,674 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45011,1689444934762 2023-07-15 18:15:38,674 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,674 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32819,1689444934565 already deleted, retry=false 2023-07-15 18:15:38,674 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32819,1689444934565 expired; onlineServers=2 2023-07-15 18:15:38,675 DEBUG [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/oldWALs 2023-07-15 18:15:38,675 INFO [RS:0;jenkins-hbase4:38289] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38289%2C1689444934501:(num 1689444935509) 2023-07-15 18:15:38,675 DEBUG [RS:0;jenkins-hbase4:38289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,675 INFO [RS:0;jenkins-hbase4:38289] regionserver.LeaseManager(133): 
Closed leases 2023-07-15 18:15:38,676 INFO [RS:0;jenkins-hbase4:38289] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-15 18:15:38,676 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:38,677 INFO [RS:0;jenkins-hbase4:38289] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38289 2023-07-15 18:15:38,677 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45011,1689444934762] 2023-07-15 18:15:38,677 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45011,1689444934762; numProcessing=3 2023-07-15 18:15:38,678 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45011,1689444934762 already deleted, retry=false 2023-07-15 18:15:38,678 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45011,1689444934762 expired; onlineServers=1 2023-07-15 18:15:38,678 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38289,1689444934501 2023-07-15 18:15:38,678 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-15 18:15:38,679 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38289,1689444934501] 2023-07-15 18:15:38,679 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38289,1689444934501; numProcessing=4 2023-07-15 18:15:38,681 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38289,1689444934501 already deleted, retry=false 2023-07-15 18:15:38,681 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38289,1689444934501 expired; onlineServers=0 2023-07-15 18:15:38,681 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40787,1689444934445' ***** 2023-07-15 18:15:38,681 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-15 18:15:38,682 DEBUG [M:0;jenkins-hbase4:40787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a92d5e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-15 18:15:38,682 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-15 18:15:38,684 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/master 2023-07-15 18:15:38,684 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-15 18:15:38,684 INFO [M:0;jenkins-hbase4:40787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1c3f7275{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-15 18:15:38,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-15 18:15:38,684 INFO [M:0;jenkins-hbase4:40787] server.AbstractConnector(383): Stopped ServerConnector@4f43c59f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:38,685 INFO [M:0;jenkins-hbase4:40787] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-15 18:15:38,685 INFO [M:0;jenkins-hbase4:40787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@745c340f{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-15 18:15:38,686 INFO [M:0;jenkins-hbase4:40787] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32c1d2f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/hadoop.log.dir/,STOPPED} 2023-07-15 18:15:38,686 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40787,1689444934445 2023-07-15 18:15:38,686 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40787,1689444934445; all regions closed. 2023-07-15 18:15:38,686 DEBUG [M:0;jenkins-hbase4:40787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-15 18:15:38,686 INFO [M:0;jenkins-hbase4:40787] master.HMaster(1491): Stopping master jetty server 2023-07-15 18:15:38,687 INFO [M:0;jenkins-hbase4:40787] server.AbstractConnector(383): Stopped ServerConnector@373c44fd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-15 18:15:38,687 DEBUG [M:0;jenkins-hbase4:40787] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-15 18:15:38,687 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-15 18:15:38,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444935207] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689444935207,5,FailOnTimeoutGroup] 2023-07-15 18:15:38,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444935207] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689444935207,5,FailOnTimeoutGroup] 2023-07-15 18:15:38,687 DEBUG [M:0;jenkins-hbase4:40787] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-15 18:15:38,687 INFO [M:0;jenkins-hbase4:40787] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-15 18:15:38,687 INFO [M:0;jenkins-hbase4:40787] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-15 18:15:38,687 INFO [M:0;jenkins-hbase4:40787] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-15 18:15:38,687 DEBUG [M:0;jenkins-hbase4:40787] master.HMaster(1512): Stopping service threads 2023-07-15 18:15:38,687 INFO [M:0;jenkins-hbase4:40787] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-15 18:15:38,688 ERROR [M:0;jenkins-hbase4:40787] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-15 18:15:38,688 INFO [M:0;jenkins-hbase4:40787] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-15 18:15:38,688 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-15 18:15:38,688 DEBUG [M:0;jenkins-hbase4:40787] zookeeper.ZKUtil(398): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-15 18:15:38,688 WARN [M:0;jenkins-hbase4:40787] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-15 18:15:38,688 INFO [M:0;jenkins-hbase4:40787] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-15 18:15:38,688 INFO [M:0;jenkins-hbase4:40787] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-15 18:15:38,688 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-15 18:15:38,688 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:38,688 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:38,688 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-15 18:15:38,688 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-15 18:15:38,688 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-15 18:15:38,699 INFO [M:0;jenkins-hbase4:40787] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7f4a99b00e6e4760a5206c5808aa2816 2023-07-15 18:15:38,703 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7f4a99b00e6e4760a5206c5808aa2816 as hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7f4a99b00e6e4760a5206c5808aa2816 2023-07-15 18:15:38,708 INFO [M:0;jenkins-hbase4:40787] regionserver.HStore(1080): Added hdfs://localhost:46849/user/jenkins/test-data/1d5748f7-9fa0-3d9b-9f39-51dbd79b1615/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7f4a99b00e6e4760a5206c5808aa2816, entries=22, sequenceid=175, filesize=11.1 K 2023-07-15 18:15:38,709 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78049, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-15 18:15:38,711 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-15 18:15:38,711 DEBUG [M:0;jenkins-hbase4:40787] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-15 18:15:38,714 INFO [M:0;jenkins-hbase4:40787] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-15 18:15:38,714 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-15 18:15:38,714 INFO [M:0;jenkins-hbase4:40787] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40787 2023-07-15 18:15:38,716 DEBUG [M:0;jenkins-hbase4:40787] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40787,1689444934445 already deleted, retry=false 2023-07-15 18:15:39,245 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,245 INFO [M:0;jenkins-hbase4:40787] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40787,1689444934445; zookeeper connection closed. 2023-07-15 18:15:39,245 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): master:40787-0x1016a32661a0000, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,345 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,345 INFO [RS:0;jenkins-hbase4:38289] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38289,1689444934501; zookeeper connection closed. 
2023-07-15 18:15:39,345 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:38289-0x1016a32661a0001, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,345 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@65a57e02] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@65a57e02 2023-07-15 18:15:39,445 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,445 INFO [RS:2;jenkins-hbase4:45011] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45011,1689444934762; zookeeper connection closed. 2023-07-15 18:15:39,445 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:45011-0x1016a32661a0003, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,446 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@60f641fa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@60f641fa 2023-07-15 18:15:39,546 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,546 INFO [RS:1;jenkins-hbase4:32819] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32819,1689444934565; zookeeper connection closed. 2023-07-15 18:15:39,546 DEBUG [Listener at localhost/32839-EventThread] zookeeper.ZKWatcher(600): regionserver:32819-0x1016a32661a0002, quorum=127.0.0.1:63689, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-15 18:15:39,546 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2cbe3205] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2cbe3205 2023-07-15 18:15:39,546 INFO [Listener at localhost/32839] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-15 18:15:39,546 WARN [Listener at localhost/32839] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:39,552 INFO [Listener at localhost/32839] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:39,657 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:39,657 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-44005676-172.31.14.131-1689444933649 (Datanode Uuid 6e3252db-c583-4276-9093-20737de8da0e) service to localhost/127.0.0.1:46849 2023-07-15 18:15:39,657 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data5/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 
18:15:39,658 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data6/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:39,659 WARN [Listener at localhost/32839] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:39,662 INFO [Listener at localhost/32839] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:39,765 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:39,765 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-44005676-172.31.14.131-1689444933649 (Datanode Uuid f6bad8fd-a126-4473-a4ab-68567571c96b) service to localhost/127.0.0.1:46849 2023-07-15 18:15:39,765 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data3/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:39,766 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data4/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:39,766 WARN [Listener at localhost/32839] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-15 18:15:39,769 INFO [Listener at localhost/32839] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:39,871 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-15 18:15:39,871 WARN [BP-44005676-172.31.14.131-1689444933649 heartbeating to localhost/127.0.0.1:46849] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-44005676-172.31.14.131-1689444933649 (Datanode Uuid bb38fc4c-88be-4f8d-af8e-f7d4ec02f1c4) service to localhost/127.0.0.1:46849 2023-07-15 18:15:39,874 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data1/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-15 18:15:39,874 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/0ac30712-ef8d-f88b-a7b3-7f2ab1648d8b/cluster_0b89b3c6-c299-04c4-2e79-7e6466d948e9/dfs/data/data2/current/BP-44005676-172.31.14.131-1689444933649] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh 
disk information: sleep interrupted 2023-07-15 18:15:39,884 INFO [Listener at localhost/32839] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-15 18:15:39,999 INFO [Listener at localhost/32839] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-15 18:15:40,044 INFO [Listener at localhost/32839] hbase.HBaseTestingUtility(1293): Minicluster is down